Spring 2016 Calendar

April 19 [CORE]
Sébastien Bubeck, Theory Group, Microsoft Research
New Results at the Crossroads of Convexity, Learning and Information Theory

April 26
Jesus De Loera, Department of Mathematics, UC Davis
Chance-Constrained Convex Mixed-Integer Programs and Beyond

May 10
Rebecca Hoberg, Department of Mathematics, University of Washington
A Polynomial-Time LP Algorithm Based on Multiplicative Weights

May 13 [Joint CSE and TOPS, 2:30 pm, PCAR 290]
Alexander Razborov, Department of Computer Science, University of Chicago
Complexity of Semi-Algebraic and Algebraic Proofs

May 13–15

May 17
Gwen Spencer, Department of Mathematics and Statistics, Smith College

May 24
Behcet Acikmese, Department of Aeronautics & Astronautics, University of Washington

May 31 [CORE]
Nina Balcan, Carnegie Mellon University

Winter 2016 Calendar

January 12
James Davis, University of Illinois at Urbana-Champaign
Customer Choice Models and Assortment Optimization

January 19
João Gouveia, Department of Mathematics, University of Coimbra
Positive semidefinite rank and representations of polytopes

February 9
Jane Ye, Department of Mathematics and Statistics, University of Victoria
On solving bilevel programming problems

February 23
Tristan van Leeuwen, Utrecht University
A penalty method for PDE-constrained optimization in inverse problems

March 8 [CORE]
Katya Scheinberg, Lehigh University
Using randomized models in black-box, derivative free and stochastic optimization

Fall 2015 Calendar

October 13 [CORE]
Michael Overton, Courant Institute of Mathematical Sciences, New York University
Nonsmooth, Nonconvex Optimization: Algorithms and Examples

October 20
Sasha Aravkin, Department of Applied Mathematics, University of Washington
An Interior Point Approach for Data Science

October 27
Damek Davis, Department of Mathematics, University of California, Los Angeles
A Three-Operator Splitting Scheme and its Optimization Applications

November 10
James Saunderson, Department of Electrical Engineering, University of Washington
Semidefinite descriptions of regular polygons (and beyond)

November 24 [CORE]
Pedro Domingos, Department of Computer Science and Engineering, University of Washington
Nonsmooth, Nonconvex Optimization: Algorithms and Examples

December 8
Victor Falgas-Ravry, Department of Mathematics, Vanderbilt University
Finding large full subgraphs

Rebecca Hoberg; A Polynomial-Time LP Algorithm Based on Multiplicative Weights

May 10, 2016, 4pm
GUG 204
Rebecca Hoberg, Department of Mathematics, UW

Abstract:  The multiplicative weights update method is a meta-algorithm with varied applications. As Arora, Hazan, and Kale show, applying this method with nonnegative weights on constraints yields an algorithm for linear programming. However, in polynomial time this approach only computes an approximate solution. In this talk, I will show how we can improve this to an exact polynomial-time algorithm. We do this by introducing a rescaling step similar to that used by Dunagan and Vempala for the perceptron algorithm. This is joint work with Thomas Rothvoss and Jon Swenson.
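To give a feel for the meta-algorithm the abstract starts from (not the exact algorithm of the talk), here is a minimal Python sketch of multiplicative weights applied to an LP feasibility problem over the simplex. The step size `eta`, the clipping of slacks to [-1, 1], and the toy vertex oracle are all illustrative choices, not parameters from the paper:

```python
def mw_lp_feasibility(A, b, T=2000, eta=0.05):
    """Approximately solve the feasibility problem
        find x in the simplex with A x >= b,
    in the style of Arora, Hazan, and Kale: keep one nonnegative weight
    per constraint, ask an 'oracle' for a point satisfying the single
    weighted-average constraint, and multiplicatively raise the weights
    of the constraints the oracle violates."""
    m, n = len(A), len(A[0])
    w = [1.0] * m                      # one weight per constraint
    x_avg = [0.0] * n
    for _ in range(T):
        s = sum(w)
        p = [wi / s for wi in w]       # normalized weights
        # Oracle over the simplex: the averaged constraint p^T A x >= p^T b
        # is linear, so it is optimized at a vertex e_j of the simplex.
        scores = [sum(p[i] * A[i][j] for i in range(m)) for j in range(n)]
        j = max(range(n), key=lambda k: scores[k])
        x = [0.0] * n
        x[j] = 1.0
        # Multiplicative update: violated constraints (negative slack)
        # gain weight, satisfied ones lose weight.
        for i in range(m):
            slack = sum(A[i][k] * x[k] for k in range(n)) - b[i]
            w[i] *= (1 - eta * max(-1.0, min(1.0, slack)))
        for k in range(n):
            x_avg[k] += x[k] / T       # the average iterate is the answer
    return x_avg
```

As the abstract notes, this only yields an approximate solution in polynomial time; the rescaling step that upgrades it to an exact algorithm is the subject of the talk and is not sketched here.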

Alexander Razborov; Complexity of Semi-Algebraic and Algebraic Proofs

**Joint CSE and TOPS Talk**
May 13, 2016, 2:30pm

PCAR 290
Alexander Razborov, University of Chicago

Abstract: Semi-algebraic and algebraic proof systems form a very important part of modern proof complexity. They explore the possibility of proving meaningful statements using algebra and geometry instead of, or in addition to, logic, manipulating polynomial equations or inequalities rather than logical formulas. Many of these systems are based on the powerful Nullstellensatz and Positivstellensatz theorems, while others are underpinned by more ad hoc principles. This area is extremely well connected to other branches of the theory of computing and mathematics, notably to approximation algorithms in combinatorial optimization, and it is growing fast.

The talk will consist of two parts. First, we will give a general overview of the area and explain some of the connections mentioned above. In the second, more technical, part we will talk about a recent paper on the width of semi-algebraic proofs in dynamic proof systems such as Cutting Planes or Lovász–Schrijver.

Sébastien Bubeck; New Results at the Crossroads of Convexity, Learning and Information Theory

CORE Series
Tuesday, Apr 19, 2016, 4pm

LOW 102
Sébastien Bubeck, Microsoft Research

ABSTRACT: I will present three new results: (i) the Cramer transform of
the uniform measure on a convex body is a universal self-concordant
barrier; (ii) projected gradient descent with Gaussian noise allows one to
sample from a log-concave measure in polynomial time; and (iii) Thompson
sampling combined with a multi-scale exploration solves the Bayesian
convex bandit problem. The unifying theme in these results is the
interplay between concepts from convex geometry, learning and
information theory.
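Result (ii) refers to an iteration often called projected Langevin dynamics. The sketch below shows only the shape of that iteration on a toy problem; the step size, step count, and projection are illustrative choices, not the parameters for which the polynomial-time guarantee is proved:

```python
import math
import random

def projected_langevin(grad_f, project, x0, eta=1e-3, steps=20000):
    """Sketch of projected gradient descent with Gaussian noise:
        x <- Proj_K( x - eta * grad f(x) + sqrt(2 * eta) * N(0, I) )
    For small eta and many steps, the final iterate is an approximate
    sample from the log-concave density proportional to exp(-f)
    restricted to the convex body K (here encoded by `project`)."""
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        noise = [random.gauss(0.0, 1.0) for _ in x]
        x = [xi - eta * gi + math.sqrt(2 * eta) * ni
             for xi, gi, ni in zip(x, g, noise)]
        x = project(x)                 # projection onto the convex body K
    return x
```

Dropping the noise term recovers ordinary projected gradient descent; the Gaussian term is what turns an optimization method into a sampler.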

BIO: Sébastien Bubeck is a Researcher at Microsoft Research (Theory Group) in Redmond, WA. Prior to Microsoft Research, he was an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. He received his MS in Mathematics from the Ecole Normale Supérieure de Cachan and his PhD from the University of Lille 1, France, where his PhD thesis won several prizes including the Jacques Neveu prize for the best French PhD thesis in Probability and Statistics. He was awarded the Sloan Research Fellowship in Computer Science in 2015, and a Best Paper Award at COLT (Conference on Learning Theory) in 2009. He was the general co-chair for COLT 2013 and 2014, and serves on the steering committee for COLT. He is also the author of the recent monograph, Convex Optimization: Algorithms and Complexity, published in 2015 as a part of Foundations and Trends in Machine Learning.

Jesus De Loera; Chance-Constrained Convex Mixed-Integer Programs and Beyond

Apr 26, 2016, 4pm
GUG 204
Jesus De Loera, UC Davis

Abstract: Chance-constrained convex mixed-integer optimization, a branch of stochastic optimization, concerns convex mixed-integer programming problems in which constraints are imprecisely known but the problems need to be solved with a minimum probability of reliability or certainty. Such problems arise quite naturally in many areas of finance (e.g., portfolio planning where losses should not exceed some risk threshold), telecommunication (service agreements where contracts require network providers to guarantee with high probability that packet losses will not exceed a certain percentage), and facility location (for medical emergency response stations, which require high probability of coverage over all possible emergency scenarios).

Such problems are notoriously difficult to solve because the feasible region is often non-convex and the exact probabilities can be hard to compute. We propose a sampling approach that overcomes these obstacles: we generalize a 2006 sampling algorithm of Calafiore and Campi for continuous variables, reducing the problem to solving a deterministic convex mixed-integer program. A new generalization of Helly's theorem is necessary to rigorously prove that the sampling algorithm works as planned.
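The scenario-sampling idea can be illustrated on a deliberately tiny mixed-integer example. Everything here (the capacity-planning story, the uniform demand, the scenario count) is a made-up illustration of the general recipe — draw scenarios, enforce every sampled constraint, solve the resulting deterministic problem — not the algorithm analyzed in the talk:

```python
import math
import random

def scenario_mip(sample_demand, n_scenarios, seed=0):
    """Toy scenario approach in the spirit of Calafiore and Campi,
    on a one-variable mixed-integer problem: choose the smallest
    integer capacity x with  P(x >= demand) >= 1 - epsilon.
    Drawing enough demand scenarios and enforcing x >= demand_i for
    ALL of them yields a deterministic problem whose solution
    satisfies the chance constraint with high confidence."""
    rng = random.Random(seed)
    scenarios = [sample_demand(rng) for _ in range(n_scenarios)]
    # Deterministic counterpart: every sampled constraint must hold,
    # and x must be integral, so take the smallest feasible integer.
    return math.ceil(max(scenarios))
```

In the continuous theory, how many scenarios suffice is governed by Helly-type dimension arguments; the generalization of Helly's theorem mentioned in the abstract plays that role in the mixed-integer setting.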

If time allows, this talk will preview the future of $S$-optimization theory, a powerful generalization of traditional continuous, integer, and mixed-integer optimization in which the variables take values in special proper subsets of $\mathbb{R}^d$. All new theorems are joint work with R. La Haye, D. Oliveros, and E. Roldan-Pensado.

Katya Scheinberg; Using randomized models in black-box, derivative free and stochastic optimization

CORE Series
Tuesday, March 8, 2016, 4:00 pm
SMI 304
Katya Scheinberg, Lehigh University

ABSTRACT: Derivative free optimization (DFO) is the field that addresses optimization of black-box functions – that is, functions whose values can be computed (possibly approximately) but whose derivatives cannot be approximated directly. The applications of DFO range from aircraft engineering to hyperparameter tuning in machine learning. All derivative free methods rely on sampling the objective function at one or more points at each iteration. Constructing and maintaining these sample sets has been one of the most essential issues in DFO. The majority of existing results rely on deterministic sampling techniques.
We will discuss new developments in using randomized sample sets within the DFO framework. Randomized sample sets have many advantages over deterministic sets. In particular, it is often easier to enforce “good” properties of the models with high probability, rather than in the worst case. In addition, randomized sample sets can help automatically discover a good local low-dimensional approximation to the high-dimensional objective function. We will demonstrate how compressed sensing results can be used to show that reduced-size random sample sets can provide full second-order information under the assumption of sparsity of the Hessian.
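As a concrete, much-simplified stand-in for the randomized sampling idea, here is a random-direction direct search in Python. The talk concerns model-based DFO (fitting local models to randomly sampled function values); this sketch keeps only the "random samples instead of a fixed stencil" ingredient, and the halving rule and iteration budget are arbitrary illustrative choices:

```python
import random

def dfo_random_direction(f, x0, radius=1.0, iters=200, seed=0):
    """Derivative-free minimization of a black-box f using a RANDOMIZED
    sample set: at each iteration draw a random direction, evaluate f at
    two trial points x +/- radius*d, move to an improving point if one
    exists, and otherwise shrink the sampling radius."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        d = [rng.gauss(0.0, 1.0) for _ in x]
        nrm = sum(di * di for di in d) ** 0.5
        d = [radius * di / nrm for di in d]      # random unit step, scaled
        for trial in ([xi + di for xi, di in zip(x, d)],
                      [xi - di for xi, di in zip(x, d)]):
            ft = f(trial)
            if ft < fx:                 # success: accept, keep the radius
                x, fx = trial, ft
                break
        else:
            radius *= 0.5               # failure: shrink the sampling radius
    return x, fx
```

The appeal of randomization visible even here is the one the abstract emphasizes: good behavior needs to hold only with high probability over the sampled directions, not for a fixed worst-case geometry of sample points.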

Bio: Katya Scheinberg is the Harvey E. Wagner Endowed Chair Professor in the Industrial and Systems Engineering Department at Lehigh University. Prior to her current position, Katya was a staff member at the IBM T.J. Watson Research Center for over a decade. Her main research interests lie broadly in continuous optimization, focusing on convex optimization, derivative free optimization, and large-scale methods for Big Data and Machine Learning applications. Katya is currently the Editor-in-Chief of the SIAM-MOS Series on Optimization and an associate editor of the SIAM Journal on Optimization. She is a recent recipient of the prestigious Lagrange Prize, along with Andrew R. Conn and Luis Nunes Vicente, for their highly influential book “Introduction to Derivative-Free Optimization”. Katya’s research is supported by grants from AFOSR, DARPA, NSF, and Yahoo.

Tristan van Leeuwen; A penalty method for PDE-constrained optimization in inverse problems

Feb 23, 2016, 4pm
SIG 226
Tristan van Leeuwen, Utrecht University

Abstract: Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal, then, is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method reduces some of the non-linearity of the problem and is less sensitive to the initial iterate.
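The structure of the penalty formulation can be shown on a toy problem. Here the "PDE" is the scalar equation m * u = q and the observation operator is the identity; both are stand-ins chosen so that the least-squares elimination of the state u is a one-line closed form, not the actual discretized PDE setting of the talk:

```python
def penalty_reduced_objective(m, d, q, lam):
    """Quadratic-penalty formulation of a toy inverse problem: instead
    of enforcing the 'PDE' m * u = q exactly, minimize over (m, u)
        (u - d)^2 + lam * (m * u - q)^2 ,
    where d is the observed data. For fixed m this is least squares in
    the state u, so u is eliminated in closed form; what remains is a
    reduced objective in the parameter m alone."""
    # Minimizer of (u - d)^2 + lam * (m*u - q)^2 over u:
    u = (d + lam * m * q) / (1.0 + lam * m * m)
    misfit = (u - d) ** 2               # data-fit term
    penalty = (m * u - q) ** 2          # PDE-residual term
    return misfit + lam * penalty, u
```

The point mirrored from the abstract: the state is still eliminated at each evaluation (as in the reduced approach), but because the PDE only enters through a penalty, u is not forced onto the constraint manifold, which enlarges the search space and can tame some of the nonlinearity in m.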