Jiashan Wang; Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

March 10, 2015, 4:00pm
GUG 204
Jiashan Wang, Department of Mathematics, UW.
Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

Abstract: We present two matrix-free methods for solving exact penalty subproblems on product sets that arise when solving large-scale optimization problems. The first approach is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) technology applied to our setting. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. We prove that both algorithms are globally convergent under loose assumptions, and that each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments demonstrate that both algorithms efficiently find inexact solutions, and in certain cases indicate that IRWA can be significantly more efficient than ADAL.

Joint work with James Burke, Frank Curtis, and Hao Wang.
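The IRWA of the talk is specialized to exact penalty subproblems on product sets, but the basic re-weighting idea can be illustrated on the simpler problem of minimizing $\|Ax - b\|_1$: each iteration recomputes weights from the current residuals and then solves a weighted linear least-squares problem. The sketch below is a generic textbook-style IRLS, not the IRWA of the talk; the damping constant `delta`, which guards against division by zero, is an illustrative choice.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, delta=1e-8):
    """Iteratively re-weighted least squares for min_x ||A x - b||_1.

    Each |r_i| is modeled as w_i * r_i^2 with w_i = 1/|r_i|, so every
    iteration reduces to a weighted linear least-squares solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # plain least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), delta)      # re-weight from residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```

On a line-fitting problem with one gross outlier, the $\ell_1$ objective (and hence this iteration) essentially ignores the outlier, whereas a plain least-squares fit is pulled toward it.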

Emily Fox; Leveraging Optimization Techniques to Scale Bayesian Inference

February 24, 2015, 4:00pm
GUG 204
Emily Fox, Department of Statistics, UW.
Leveraging Optimization Techniques to Scale Bayesian Inference

Abstract: Data streams of increasing complexity are being collected in a variety of fields, ranging from neuroscience, genomics, and environmental monitoring to e-commerce, based on technologies and infrastructures that were previously unavailable. With the advent of Markov chain Monte Carlo (MCMC), combined with the computational power to implement such algorithms, deploying increasingly expressive models has been a focus in recent decades. Unfortunately, traditional algorithms for Bayesian inference in these models, such as MCMC and variational inference, do not typically scale to the large datasets encountered in practice. Likewise, these algorithms are not applicable to the increasingly common situation where an unbounded amount of data arrives as a stream and inferences need to be made on the fly. In this talk, we will present a series of algorithms (stochastic gradient Hamiltonian Monte Carlo, HMM stochastic variational inference, and streaming Bayesian nonparametric inference) to address various aspects of the challenge of scaling Bayesian inference; our algorithms focus on deploying stochastic gradients and working within an optimization framework. We demonstrate our methods on a variety of applications, including online movie recommendations, segmenting a human chromatin data set with 250 million observations, and clustering a stream of New York Times documents.

Joint work with Tianqi Chen, Nick Foti, Dillon Laird, Alex Tank, and Jason Xu.
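One of the algorithms mentioned, stochastic gradient Hamiltonian Monte Carlo, combines noisy (minibatch-style) gradients with momentum and a friction term, so that the injected noise does not destroy the target distribution. The sketch below runs a minimal SGHMC-style chain on a standard-normal target whose exact gradient is simply $x$; the step size `eta`, friction `alpha`, and gradient-noise level are illustrative choices, not values from the talk.

```python
import numpy as np

def sghmc_gaussian(n_steps=20000, eta=0.01, alpha=0.1, grad_noise=0.5, seed=0):
    """Minimal SGHMC-style sampler for a standard-normal target U(x) = x^2/2.

    The exact gradient of U is x; we corrupt it with noise to mimic a
    stochastic (minibatch) gradient, and add friction plus matched
    injected noise so the chain still targets roughly N(0, 1)."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        g = x + grad_noise * rng.standard_normal()            # noisy gradient
        v += -eta * g - alpha * v \
             + np.sqrt(2 * alpha * eta) * rng.standard_normal()
        x += v
        samples[i] = x
    return samples
```

After a burn-in period, the empirical mean and standard deviation of the chain should be close to 0 and 1, up to discretization bias.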

Hon Leung Lee; Minimizing Distance to an Orthogonally Invariant Matrix Set

January 27, 2015, 4:00pm
GUG 204
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

Abstract: The problem of finding the closest point in a set, with respect to Euclidean distance, arises frequently in applications. In this talk we focus on orthogonally invariant matrix sets and provide a framework for computing and counting the real smooth critical points of this minimization problem, as well as finding the minimizers. The key results are the “transfer principles” that allow calculations to be done in the Euclidean space containing the vectors of singular values of the members in the matrix set. Running examples include the Eckart-Young theorem and finding the nearest essential matrix. This is joint work with Dmitriy Drusvyatskiy and Rekha Thomas.
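The Eckart-Young theorem is the classical instance of such a transfer principle: the Frobenius-nearest matrix of rank at most k is computed entirely from the singular values, by zeroing out all but the k largest. A minimal sketch:

```python
import numpy as np

def nearest_rank_k(X, k):
    """Frobenius-nearest matrix of rank at most k (Eckart-Young):
    truncate the SVD to its k largest singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[k:] = 0.0                     # the calculation happens on singular values
    return U @ (s[:, None] * Vt)
```

The distance to the set of rank-k matrices is then the Euclidean norm of the discarded singular values, exactly the kind of statement the transfer principle makes precise.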

James Renegar; Extending the Applicability of Efficient First-Order Methods for Convex Optimization

CORE Series
January 13, 2015, 4:30pm
EEB 125
James Renegar, Operations Research and Information Engineering, Cornell University.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

Abstract: The study of first-order methods has largely dominated research in continuous optimization for the last decade, yet the range of problems for which efficient and easily applicable first-order methods have been developed remains surprisingly limited, even though much has been achieved in some high-profile areas such as compressed sensing.

We present a simple transformation of any linear or semidefinite (or hyperbolic) program into an equivalent convex optimization problem whose only constraints are linear equations. The objective function is defined on the whole space, making virtually all subgradient methods immediately applicable. We observe, moreover, that the objective function is naturally smoothed, thereby allowing most first-order methods to be applied.

We develop complexity bounds in the unsmoothed case for a particular subgradient method, and in the smoothed case for Nesterov’s original optimal first-order method for smooth functions, achieving the desired bounds on the number of iterations in both settings.
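The setting described above, a convex objective defined on the whole space with only linear equality constraints, is exactly the situation where projected subgradient methods apply off the shelf, since projection onto {x : Ax = b} is an affine map. The sketch below is a generic projected subgradient method with a diminishing 1/√t step size, not Renegar's transformation itself; the function names and the ℓ1 test problem are illustrative choices.

```python
import numpy as np

def projected_subgradient(subgrad, A, b, x0, steps=500):
    """Minimize a convex function over {x : A x = b}: take a subgradient
    step, then project back onto the affine set."""
    Apinv = np.linalg.pinv(A)                  # A assumed full row rank
    proj = lambda z: z - Apinv @ (A @ z - b)   # Euclidean projection onto {Ax=b}
    x = proj(np.asarray(x0, dtype=float))      # start from a feasible point
    for t in range(1, steps + 1):
        x = proj(x - subgrad(x) / np.sqrt(t))  # diminishing step size
    return x
```

For example, minimizing the ℓ1 norm subject to a single linear equation (with `np.sign` as the subgradient map) drives the objective to its optimal value while every iterate stays feasible.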

Perhaps most surprising is that both the transformation and the basic theory are simple, and yet the approach has been overlooked until now, a blind spot.

Bio: James Renegar has been a Professor in the School of Operations Research and Information Engineering at Cornell University since 1987, and has served as its Director (2004-2009). His research interests lie in continuous optimization, and more broadly, in the interplay between algorithms, geometry, and algebraic techniques. Professor Renegar has made fundamental contributions to the design and analysis of interior point methods, error analysis in convex optimization, and numeric and algebraic computational complexity. He was one of the five founding members of the Society for Foundations of Computational Mathematics, and has long been on its board of directors. He was the chair of the workshop committee at the last meeting of the society (2014). Professor Renegar was a Semi-Plenary speaker at the 17th International Symposium on Mathematical Programming (2000) and an invited speaker at the International Congress of Mathematicians (1990). In addition, he has published an influential book on the mathematics of interior point methods (SIAM 2001) and has received numerous teaching awards.

Winter 2015 Calendar

January 13 [CORE]
James Renegar, School of Operations Research and Information Engineering, Cornell.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

January 15 [CORE, CSE]
Jon Kleinberg, Departments of Computer Science and Information Science, Cornell.
Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

January 27
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

February 24
Emily Fox, Department of Statistics, UW.
Leveraging Optimization Techniques to Scale Bayesian Inference

February 27 [MAC]
Jeannette Wing, Corporate Vice President of Microsoft Research.
Computational Thinking

March 10
Jiashan Wang, Department of Mathematics, UW.
Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets
