
John Duchi; Composite modeling and optimization, with applications to phase retrieval and nonlinear observation modeling

CORE Series
Friday, Oct 20, 2017
SMI 211, 3:30 PM
John Duchi, Stanford University (Departments of Statistics and Electrical Engineering)

TITLE: Composite modeling and optimization, with applications to phase retrieval and nonlinear observation modeling

ABSTRACT: We consider minimization of stochastic functionals that are compositions of a (potentially) non-smooth convex function h and a smooth function c. We develop two stochastic methods, a stochastic prox-linear algorithm and a stochastic (generalized) subgradient procedure, and prove that, under mild technical conditions, each converges to first-order stationary points of the stochastic objective. Additionally, we analyze this problem class in the context of phase retrieval and more generic nonlinear modeling problems, showing that we can solve these problems (even with faulty measurements) with extremely high probability under appropriate random measurement models. We provide substantial experiments investigating our methods, indicating the practical effectiveness of the procedures.
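As a rough illustration of the second of these two methods (a sketch, not the speaker's code), the robust phase-retrieval objective F(x) = (1/n) Σ_i |⟨a_i, x⟩² − b_i| fits the composite template with h(z) = |z| and c_i(x) = ⟨a_i, x⟩² − b_i. The synthetic data, the warm start near the signal, and the step-size schedule below are illustrative choices, not prescriptions from the talk:

```python
import numpy as np

def stochastic_subgradient_phase_retrieval(A, b, x0, step0=0.01, n_iters=2000, seed=0):
    """Stochastic subgradient method for F(x) = (1/n) sum_i |<a_i, x>^2 - b_i|,
    i.e. h(z) = |z| composed with the smooth map c_i(x) = <a_i, x>^2 - b_i."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = x0.copy()
    for k in range(n_iters):
        i = rng.integers(n)                     # sample one measurement
        a = A[i]
        r = (a @ x) ** 2 - b[i]
        # chain rule: a subgradient of |c_i(x)| is sign(r) * grad c_i(x)
        g = np.sign(r) * 2.0 * (a @ x) * a
        x -= step0 / np.sqrt(k + 1) * g         # diminishing step size
    return x

def objective(A, b, x):
    return np.mean(np.abs((A @ x) ** 2 - b))

rng = np.random.default_rng(1)
d, n = 5, 200
x_true = rng.standard_normal(d); x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((n, d))
b = (A @ x_true) ** 2                           # noiseless phaseless measurements
x0 = x_true + 0.3 * rng.standard_normal(d)      # warm start near the signal (illustrative)
x_hat = stochastic_subgradient_phase_retrieval(A, b, x0)
```

The prox-linear variant would instead linearize c inside h at each step and solve the resulting structured convex subproblem.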


BIO: John Duchi is an assistant professor of Statistics and Electrical Engineering and (by courtesy) Computer Science at Stanford University, with graduate degrees from UC Berkeley and undergraduate degrees from Stanford. His work focuses on large-scale optimization problems arising out of statistical and machine learning problems, robustness and uncertain data problems, and information-theoretic aspects of statistical learning. He has won a number of awards and fellowships, including a best paper award at the International Conference on Machine Learning, an NSF CAREER award, and a Sloan Fellowship in Mathematics.

Scott Roy; An Optimal First-order Method Based on Optimal Quadratic Averaging

May 1, 2017, 4pm
PDL C-401
Scott Roy, Department of Mathematics, University of Washington

Abstract: In a recent paper, Bubeck, Lee, and Singh introduced a new first-order method for minimizing smooth strongly convex functions. Their geometric descent algorithm, largely inspired by the ellipsoid method, enjoys the optimal linear rate of convergence. We show that the same iterate sequence is generated by a scheme that in each iteration computes an optimal average of quadratic lower-models of the function. Indeed, the minimum of the averaged quadratic approaches the true minimum at an optimal rate. This intuitive viewpoint reveals clear connections to the original fast-gradient methods and cutting plane ideas, and leads to limited-memory extensions with improved performance.

Joint work with Dmitriy Drusvyatskiy and Maryam Fazel.
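The core averaging step admits a closed form when, as for a μ-strongly convex function, every lower model has the shape Q(x) = v + (μ/2)‖x − c‖². A convex combination of two such models has the same shape, and the weight maximizing its minimum value is available explicitly. The sketch below (illustrative constants, not from the talk) verifies that the averaged lower bound is at least as good as either input:

```python
import numpy as np

def average_quadratics(vA, cA, vB, cB, mu):
    """Optimally average two quadratic lower models Q(x) = v + (mu/2)||x - c||^2,
    choosing the weight lam in [0, 1] that maximizes the minimum value of
    lam * Q_A + (1 - lam) * Q_B."""
    D = np.sum((cA - cB) ** 2)
    if D == 0:
        lam = 1.0 if vA >= vB else 0.0
    else:
        # the averaged minimum v(lam) is concave quadratic in lam; maximize it
        lam = float(np.clip(0.5 + (vA - vB) / (mu * D), 0.0, 1.0))
    c = lam * cA + (1 - lam) * cB
    v = lam * vA + (1 - lam) * vB + 0.5 * mu * lam * (1 - lam) * D
    return v, c

# two lower models of some mu-strongly convex function (made-up numbers)
mu = 2.0
vA, cA = 1.0, np.array([0.0, 0.0])
vB, cB = 1.2, np.array([1.0, 0.5])
v, c = average_quadratics(vA, cA, vB, cB, mu)   # v exceeds both vA and vB
```

Because the maximum over lam in [0, 1] is taken at or above the endpoint values, v ≥ max(vA, vB): averaging never loses information, which is what drives the optimal rate.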

Andrew Pryhuber; A QCQP Approach for Triangulation

Apr 25, 2017, 4pm
PDL C-401
Andrew Pryhuber, Department of Mathematics, University of Washington

Abstract: Reconstruction of a 3D world point from $n\geq 2$ noisy 2D images is referred to as the triangulation problem and is fundamental in multi-view geometry. We show how this problem can be formulated as a quadratically constrained quadratic program and discuss an algorithm to construct candidate solutions. We also present a polynomial time test motivated by the underlying geometry of the triangulation problem to confirm optimality of such a solution. Based on work by Chris Aholt, Sameer Agarwal, and Rekha Thomas.
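For context, a candidate 3D point can always be constructed by the standard linear (DLT) triangulation method; this generic construction is not necessarily the candidate-generation algorithm of the talk, but it shows the shape of the problem. Each view contributes two linear equations in the homogeneous world point:

```python
import numpy as np

def triangulate_dlt(Ps, us):
    """Linear (DLT) triangulation: each camera P (3x4) and image point (u, v)
    contribute the rows u*P[2] - P[0] and v*P[2] - P[1]; the homogeneous 3D
    point is the null vector of the stacked system, found by SVD."""
    rows = []
    for P, (u, v) in zip(Ps, us):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # right singular vector of smallest singular value
    return X[:3] / X[3]             # dehomogenize

# synthetic two-view instance (illustrative cameras, not from the talk)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # translated camera
X_true = np.array([1.0, 2.0, 5.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

us = [project(P1, X_true), project(P2, X_true)]             # noiseless observations
X_hat = triangulate_dlt([P1, P2], us)
```

With noisy observations the DLT answer is only a candidate; the QCQP view adds the exact projection constraints and the optimality certificate discussed in the talk.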

Tristan van Leeuwen; A penalty method for PDE-constrained optimization in inverse problems

Feb 23, 2016, 4pm
SIG 226
Tristan van Leeuwen, Utrecht University

Abstract: Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal, then, is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method reduces some of the non-linearity of the problem and is less sensitive to the initial iterate.
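The penalty formulation can be sketched on a toy linear problem. Below, the "PDE" is a made-up parameterized linear system A(m)u = q (not an operator from the talk): for each parameter value m, the state u is eliminated in closed form from the quadratic penalty objective, leaving a reduced function of m alone, here minimized by a simple grid scan:

```python
import numpy as np

def A_of(m, K):
    # toy parameterized "PDE" operator A(m) = I + m*K (illustrative only)
    return np.eye(K.shape[0]) + m * K

def phi(m, K, P, d, q, rho):
    """Penalty objective with the state eliminated:
    phi(m) = min_u 0.5*||P u - d||^2 + (rho/2)*||A(m) u - q||^2,
    where the inner minimization is solved via its normal equations."""
    A = A_of(m, K)
    H = P.T @ P + rho * A.T @ A
    rhs = P.T @ d + rho * A.T @ q
    u = np.linalg.solve(H, rhs)
    return 0.5 * np.sum((P @ u - d) ** 2) + 0.5 * rho * np.sum((A @ u - q) ** 2)

n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
P = np.eye(n)[::4]                  # observe every 4th state entry
q = np.ones(n)                      # source term
m_true = 1.0
u_true = np.linalg.solve(A_of(m_true, K), q)
d = P @ u_true                      # noiseless partial measurements
grid = np.linspace(0.5, 1.5, 101)
m_hat = grid[np.argmin([phi(m, K, P, d, q, rho=1e3) for m in grid])]
```

Each evaluation of phi costs one linear solve, comparable to the reduced approach's PDE solve, but the state is no longer forced to satisfy the PDE exactly, which is the larger search space the abstract mentions.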

Jane Ye; On solving bilevel programming problems

Feb 9, 2016, 4pm
SIG 226
Jane Ye, University of Victoria, Canada.

Abstract: A bilevel program is a sequence of two optimization problems where the constraint region of the upper level problem is determined implicitly by the solution set to the lower level problem. In this talk we report some recent progress on solving bilevel programming problems. In the case where all functions are polynomials, we propose a numerical method for globally solving the polynomial bilevel program. For the simple bilevel program where the lower level constraints are independent of the upper level variables we propose a numerical method for finding a stationary point.
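The nested structure is easy to see on a tiny example. The brute-force grid search below is purely illustrative of the problem format, not one of the numerical methods proposed in the talk; the toy instance is chosen so the lower level forces y(x) = x and the bilevel problem collapses to minimizing x² + (x − 1)²:

```python
import numpy as np

def solve_bilevel_grid(F, f, xs, ys):
    """Brute-force bilevel solve: for each upper-level x, compute the
    lower-level minimizer y(x) = argmin_y f(x, y) on a grid, then
    minimize the upper objective F(x, y(x)) over x."""
    best_x, best_val = None, np.inf
    for x in xs:
        y = ys[np.argmin([f(x, y) for y in ys])]   # lower-level response
        val = F(x, y)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# toy instance: y(x) = x, so the problem reduces to min_x x^2 + (x - 1)^2,
# whose solution is x = 0.5 with value 0.5
F = lambda x, y: x ** 2 + (y - 1.0) ** 2    # upper-level objective
f = lambda x, y: (y - x) ** 2               # lower-level objective
xs = np.linspace(-2.0, 2.0, 401)
ys = np.linspace(-3.0, 3.0, 601)
x_hat, val = solve_bilevel_grid(F, f, xs, ys)
```

Real bilevel methods replace the inner grid search with optimality conditions or value-function reformulations, since grids are hopeless beyond a couple of dimensions.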

Winter 2016 Calendar

January 12
James Davis, University of Illinois at Urbana-Champaign
Customer Choice Models and Assortment Optimization

January 19
João Gouveia, Department of Mathematics, University of Coimbra
Positive semidefinite rank and representations of polytopes

February 9
Jane Ye, Department of Mathematics and Statistics
University of Victoria

On solving bilevel programming problems

February 23
Tristan van Leeuwen, Utrecht University
A penalty method for PDE-constrained optimization in inverse problems

March 8 [core]
Katya Scheinberg, Lehigh University
Using randomized models in black-box, derivative free and stochastic optimization

James Davis; Customer Choice Models and Assortment Optimization

Tuesday, January 12, 2016, 4:00 pm
SIG 226

James Davis, University of Illinois at Urbana-Champaign.
Customer Choice Models and Assortment Optimization

Abstract: In this talk we will focus on the assortment optimization problem. In this problem there is a set of products, each of which gives some fixed revenue when it is purchased by a customer. There is also a set of customers whose purchasing behavior is influenced by what products they view. Our objective is to select some subset of products to display to customers in order to maximize expected revenue. The key component in this problem is customer purchasing behavior, which we describe with a customer choice model. We will introduce a variety of customer choice models, each of which presents unique challenges. Under one of these models, the multinomial logit model, we will solve variants of the assortment optimization problem.
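Under the multinomial logit (MNL) model, the basic problem has a clean structure: a classical result says that some revenue-ordered assortment (offer the k highest-revenue products, for some k) is optimal. The sketch below uses made-up revenues and preference weights and checks this against brute-force enumeration:

```python
from itertools import combinations

def mnl_revenue(S, r, v):
    """Expected revenue under the multinomial logit model: a customer offered
    assortment S buys product i with probability v[i] / (1 + sum_{j in S} v[j]),
    where the outside (no-purchase) option has weight 1."""
    denom = 1.0 + sum(v[i] for i in S)
    return sum(r[i] * v[i] for i in S) / denom

def best_revenue_ordered(r, v):
    """Search only the nested, revenue-ordered assortments: products sorted by
    decreasing revenue, prefixes of length 1, 2, ...; under MNL one of these
    is known to be optimal."""
    order = sorted(range(len(r)), key=lambda i: -r[i])
    best = (0.0, ())
    for k in range(1, len(order) + 1):
        S = tuple(order[:k])
        best = max(best, (mnl_revenue(S, r, v), S))
    return best

r = [10.0, 8.0, 6.0, 4.0, 2.0]   # per-product revenues (illustrative)
v = [0.3, 0.5, 0.8, 1.0, 1.5]    # MNL preference weights (illustrative)
rev_ro, S_ro = best_revenue_ordered(r, v)
# brute force over all 2^n assortments for comparison
rev_all = max(mnl_revenue(S, r, v)
              for k in range(len(r) + 1)
              for S in combinations(range(len(r)), k))
```

The revenue-ordered search is O(n log n) versus 2^n for enumeration; the variants mentioned in the talk (capacity constraints, other choice models) break this simple structure.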