Category Archives: Spring 2016

Visiting Talks by Alexander Ioffe, Ting Kei Pong, and Asen Dontchev

Wednesday June 8, 2016 
Padelford C-401

11:00 – 11:45 am
Speaker:
Ting Kei Pong (Hong Kong Polytechnic)

Title: Explicit estimation of KL exponent and linear convergence of 1st-order methods
Abstract: In this talk, we study the Kurdyka-Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we show that many convex or nonconvex optimization models that arise in applications such as sparse recovery have objectives whose KL exponent is 1/2: this indicates that various first-order methods are locally linearly convergent when applied to these models. Our results cover the sparse logistic regression problem and the least squares problem with SCAD or MCP regularization. We achieve this by relating the KL inequality to an error bound concept studied extensively by Luo and Tseng, and by developing calculus rules for the KL exponent.
This is joint work with Guoyin Li.
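For reference, the KL inequality with exponent α reads, in its standard form (a textbook statement added here for the reader, not notation from the talk): there exist c, ε > 0 such that, for all x near a stationary point x̄ with f(x̄) < f(x) < f(x̄) + ε,

\[
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; c\,\bigl(f(x) - f(\bar{x})\bigr)^{\alpha}.
\]

The case α = 1/2 is the one closely tied to the Luo-Tseng error bound mentioned above, which is what drives local linear convergence.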

1:15 – 2:00 pm
Speaker:
Asen L. Dontchev (Mathematical Reviews)

Title: The Four Theorems of Lawrence Graves
Abstract: The classical inverse/implicit function theorems revolve around solving an equation in terms of a parameter and tell us when the solution mapping associated with this equation is a (differentiable) function of the parameter. Already in 1927, Hildebrandt and Graves observed that one can set differentiability aside and still obtain that the solution mapping is Lipschitz continuous. The idea has evolved through subsequent extensions, the best known of which are various reincarnations of the Lyusternik-Graves theorem. In the last several decades it has been widely accepted that, in order to derive estimates for the solution mapping, e.g., to put them to use in proving convergence of algorithms, it is sufficient to differentiate what you can and leave the rest as is, hoping that the resulting problem is easier to handle. More sophisticated results may be obtained by employing various forms of metric regularity, starting from abstract results on mappings acting in metric spaces and ending with applications to numerical analysis. I will focus in particular on strong metric subregularity, a property which, put next to [strong] metric regularity, turns out to be equally instrumental in applications.
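For reference, the central definitions in standard notation (added here for the reader): a set-valued mapping F is metrically subregular at x̄ for ȳ ∈ F(x̄) if there exist κ ≥ 0 and a neighborhood U of x̄ such that

\[
\operatorname{dist}\bigl(x, F^{-1}(\bar{y})\bigr) \;\le\; \kappa\,\operatorname{dist}\bigl(\bar{y}, F(x)\bigr) \quad \text{for all } x \in U,
\]

and strongly metrically subregular if, in addition, F^{-1}(ȳ) ∩ U = {x̄}. Metric regularity asks for the analogous estimate uniformly for all y near ȳ rather than only at ȳ.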

2:00 – 2:45 pm
Speaker:
Alexander Ioffe (Technion)

Title: On variational inequalities over polyhedral sets
Abstract: The results on regular behavior of solutions to variational inequalities over polyhedral sets, proved in a series of papers by Robinson, Ralph, and Dontchev-Rockafellar in the 90s, have long since become classics of variational analysis. But the available proofs, focused on the study of piecewise affine mappings and Lipschitz homeomorphisms and based essentially on matrix algebra and/or topology, are rather complicated and make practically no use of the techniques of variational analysis. The only exception is the proof by Dontchev and Rockafellar of their “critical face” regularity criterion. I shall discuss a different approach, based entirely on elementary polyhedral geometry and a few basic principles of metric regularity theory. It leads to new proofs, simpler and shorter, and in addition yields some clarifying geometric information.
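For context, the problem class has the standard form (standard notation, added here): given a function f and a polyhedral convex set C, the variational inequality is to

\[
\text{find } x \in C \ \text{such that}\ \langle f(x),\, y - x \rangle \;\ge\; 0 \quad \text{for all } y \in C,
\]

equivalently 0 ∈ f(x) + N_C(x), where N_C(x) is the normal cone to C at x. Polyhedrality of C is what makes the face structure, and hence a “critical face” condition, available.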

Behçet Açıkmeşe; Convexification-Based Real-Time Optimization for High Performance Autonomous Control

May 31, 2016, 4pm
GUG 204
Behçet Açıkmeşe, Department of Aeronautics & Astronautics, University of Washington

Abstract: Many future engineering applications will require dramatic increases in our existing autonomous control capabilities. These include robotic sample return missions to planets, comets, and asteroids; formation flying spacecraft applications; applications utilizing swarms of autonomous agents; unmanned aerial, ground, and underwater vehicles; and autonomous commercial robotic applications. A key control challenge for many autonomous systems is to achieve the performance goals safely with minimal resource use in the presence of mission constraints and uncertainties. In principle, these problems can be formulated and solved as optimization problems. The challenge is solving them reliably onboard the autonomous system in real time.

Our research has provided new analytical results that enabled the formulation of many autonomous control problems in a convex optimization framework, i.e., convexification of the control problem. The main mathematical theory used in achieving convexification is the duality theory of optimization. Duality theory manifests itself as Pontryagin’s Maximum Principle in infinite-dimensional optimization problems and as the KKT conditions in finite-dimensional parameter optimization problems. Both theories were instrumental in our developments. Our analytical framework also allowed the computation of precise bounds on the performance of a control system in terms of constrained controllability/reachability sets. This proved to be an important step in rigorous V&V (verification and validation) of the resulting control decision-making algorithms.
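In the notation commonly used for such sets (a paraphrase added here, not the speaker’s formulas), the constrained reachable set collects every state the constrained dynamics can attain by time t:

\[
\mathcal{R}(t; x_0) \;=\; \bigl\{\, x(t) \;:\; \dot{x} = g(x, u),\ x(0) = x_0,\ u(\tau) \in U,\ x(\tau) \in X \ \text{for all } \tau \in [0, t] \,\bigr\},
\]

so “precise bounds of performance” amounts to characterizing, or tightly bounding, this set.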

This presentation introduces several real-world aerospace engineering applications where this approach either produced dramatically improved performance over the heritage technology or enabled a fundamentally new technology. A particularly important application is fuel-optimal control for planetary soft landing, whose complete solution had been an open problem since the Apollo Moon landings of the 1960s. We developed a novel “lossless convexification” method, which enables next-generation planetary missions, such as Mars robotic sample return and manned missions. Another application is Markov chain synthesis with “safety” constraints, which enabled the development of new decentralized coordination and control methods for spacecraft swarms.
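To give a rough sense of what a losslessly convexified landing problem looks like after discretization, here is a minimal sketch in Python using the CVXPY package. All numbers are invented, mass is held constant, and the dynamics are simplified; the actual formulation, and the proof that the relaxed thrust constraint is tight at the optimum, are in the papers, not here.

import numpy as np
import cvxpy as cp

# Minimal illustrative sketch of a discretized minimum-fuel descent with the
# relaxed thrust-magnitude constraint used in lossless convexification.
# All data is invented; mass is held constant for simplicity.
N, dt = 60, 0.5                          # 30-second horizon
g = np.array([0.0, 0.0, -3.71])          # Mars gravity, m/s^2 (illustrative)
m = 1000.0                               # vehicle mass, kg (assumed constant)
rho1, rho2 = 2000.0, 8000.0              # lower/upper thrust bounds, N

r = cp.Variable((3, N + 1))              # position trajectory
v = cp.Variable((3, N + 1))              # velocity trajectory
T = cp.Variable((3, N))                  # thrust vector at each step
Gamma = cp.Variable(N)                   # slack for the thrust magnitude

cons = [r[:, 0] == np.array([300.0, 0.0, 1500.0]),
        v[:, 0] == np.array([0.0, 0.0, -75.0]),
        r[:, N] == 0, v[:, N] == 0]      # soft landing at the origin
for k in range(N):
    cons += [v[:, k + 1] == v[:, k] + dt * (T[:, k] / m + g),
             r[:, k + 1] == r[:, k] + dt * v[:, k],
             cp.norm(T[:, k]) <= Gamma[k],    # convex relaxation of ||T|| = Gamma
             rho1 <= Gamma[k], Gamma[k] <= rho2]

prob = cp.Problem(cp.Minimize(dt * cp.sum(Gamma)), cons)  # fuel ~ integral of Gamma
prob.solve()
print(prob.status, prob.value)

The nonconvex constraint rho1 <= ||T_k|| <= rho2 is replaced by the convex pair ||T_k|| <= Gamma_k, rho1 <= Gamma_k <= rho2; the “lossless” part of the theory is the guarantee that the relaxation is exact at optimality.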

Biographical Sketch: Behçet Açıkmeşe is a faculty member in the Department of Aeronautics and Astronautics at the University of Washington, Seattle. He received his Ph.D. in Aerospace Engineering from Purdue University. Previously, he was a senior technologist at JPL and a lecturer at Caltech. At JPL, Dr. Açıkmeşe developed control algorithms for planetary landing, spacecraft formation flying, and asteroid and comet sample return missions. He was the developer of the “flyaway” control algorithms for the Mars Science Laboratory (MSL) mission, which successfully landed on Mars in August 2012, and of the reaction control system algorithms for the NASA SMAP mission, which was launched in 2015. Dr. Açıkmeşe is a recipient of an NSF CAREER Award and of several NASA Group Achievement Awards for his contributions to the MSL and SMAP missions and to Mars precision landing and formation flying technology development. He is an Associate Fellow of AIAA, a senior member of IEEE, and an associate editor of IEEE Control Systems Magazine and AIAA JGCD.

Nina Balcan; Distributed Machine Learning

CORE Series
Tuesday, May 24, 2016, 4:00 pm
Electrical Engineering Building (EEB) 125

Maria-Florina Balcan, Carnegie Mellon University

TITLE: Distributed Machine Learning

ABSTRACT: We consider the problem of learning from distributed data and analyze fundamental algorithmic and communication complexity questions involved. Broadly, we consider a framework where information is distributed between several locations, and our goal is to learn a low-error hypothesis with respect to the overall data by using as little communication, and as few rounds of communication, as possible. As an example, suppose k research groups around the world have collected large scientific datasets, such as genomic sequence data or sky survey data, and we wish to perform learning over the union of all these different datasets without too much communication.

In this talk, I will first discuss a general statistical, or PAC-style, framework for analyzing the communication complexity issues that arise when doing distributed supervised machine learning, i.e., learning from annotated data distributed across multiple locations. I will discuss general lower bounds on the amount of communication needed to learn a given class and broadly applicable techniques for achieving communication-efficient supervised learning.
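As a toy illustration of the kind of protocol being measured (a sketch added here, not a construction from the talk): k sites hold pieces of a linearly separable dataset, a center maintains a linear hypothesis, and each round every site reports one counterexample; communication is counted in examples sent.

import numpy as np

# Toy distributed-perceptron sketch (illustrative only). Each round, every
# site sends one example the current hypothesis misclassifies; the center
# performs perceptron updates. Communication = number of examples sent.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

def make_site():
    X = rng.normal(size=(300, 5))
    X = X[np.abs(X @ w_true) > 1.0]      # enforce a margin so convergence is fast
    return X, np.sign(X @ w_true)

sites = [make_site() for _ in range(4)]  # k = 4 sites

w, sent, rounds = np.zeros(5), 0, 0
for rounds in range(1, 201):
    msgs = []
    for X, y in sites:
        bad = np.nonzero(y * (X @ w) <= 0)[0]
        if bad.size:
            msgs.append((X[bad[0]], y[bad[0]]))   # one counterexample per site
    if not msgs:
        break                             # every site is consistent: done
    sent += len(msgs)
    for x, label in msgs:
        w += label * x                    # perceptron update at the center
print(f"rounds={rounds}, examples communicated={sent}")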

I will also discuss algorithms with good communication complexity for unsupervised learning and dimensionality reduction problems, with interesting connections to efficient distributed coreset construction.

BIO: Maria-Florina Balcan is an Associate Professor in the School of Computer Science at Carnegie Mellon University. Her main research interests are machine learning, computational aspects of economics and game theory, and algorithms. Her honors include the CMU SCS Distinguished Dissertation Award, an NSF CAREER Award, a Microsoft Faculty Research Fellowship, a Sloan Research Fellowship, and several paper awards. She was a Program Committee Co-chair for COLT 2014 and a board member of the International Machine Learning Society, and is currently a Program Committee Co-chair for ICML 2016.

Gwen Spencer; On the Robust Hardness of Gröbner Basis Computation

May 17, 2016, 4pm
GUG 204
Gwen Spencer, Smith College

Abstract: It is well known that computing a Gröbner basis* for a general polynomial system is not possible in polynomial time (unless P=NP). What if the Gröbner basis need not be computed exactly? We explore several models of “Approximate Gröbner Basis Computation” that allow an algorithm to selectively ignore some of the polynomials: the algorithm is only responsible for returning a Gröbner basis corresponding to the remaining polynomials. Reducing from a charismatic family of tight hardness-of-approximation results, we prove that these approximate Gröbner basis problems are still NP-hard for lexicographic orders, even when the algorithm can ignore a substantial constant fraction of the polynomial system (our notions of “approximate” are parameterized). Our results hold even for very low-degree polynomial systems that are quite sparse, and even when the algorithm is allowed to choose the lexicographic order based on the input. A byproduct of this work is a novel view of the relationships between the members of a famous family of satisfiability results.

*Little prior exposure to Gröbner basis ideas will be assumed.
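For a concrete sense of the exact problem whose approximate variants the talk addresses, an exact Gröbner basis for a toy system can be computed with SymPy (an example added here; any small system would do):

import sympy as sp

# Exact Groebner basis of a tiny polynomial system under a lexicographic
# order -- the object whose approximate computation the talk shows remains
# NP-hard. The system is chosen purely for illustration.
x, y, z = sp.symbols('x y z')
system = [x**2 + y - 1, x*z - y, y**2 - z]
G = sp.groebner(system, x, y, z, order='lex')
print(G)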

Rebecca Hoberg; A Polynomial-time LP Algorithm based on Multiplicative Weights

May 10, 2016, 4pm
GUG 204
Rebecca Hoberg, Department of Mathematics, UW

Abstract:  The multiplicative weights update method is a meta-algorithm with varied applications. As Arora, Hazan, and Kale show, applying this method with nonnegative weights on constraints yields an algorithm for linear programming. However, in polynomial time this approach only computes an approximate solution. In this talk, I will show how we can improve this to an exact polynomial-time algorithm. We do this by introducing a rescaling step similar to that used by Dunagan and Vempala for the perceptron algorithm. This is joint work with Thomas Rothvoss and Jon Swenson.
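To make the meta-algorithm concrete, here is a toy run of multiplicative weights on a zero-sum matrix game, which is a linear program in disguise (an illustration of the Arora-Hazan-Kale template added here, not the algorithm from the talk; plain MWU like this only reaches an additive-epsilon approximation, which is exactly the gap the rescaling step is meant to close):

import numpy as np

# Multiplicative weights on a zero-sum matrix game: the row player keeps one
# weight per row "constraint" and the column player best-responds each round.
rng = np.random.default_rng(1)
A = rng.uniform(size=(6, 8))        # payoffs in [0, 1]; row player minimizes
T, eta = 5000, 0.05
w = np.ones(6)
avg_q = np.zeros(8)
for _ in range(T):
    p = w / w.sum()                 # current mixed strategy over rows
    j = int(np.argmax(p @ A))       # column player's best response
    avg_q[j] += 1.0 / T
    w *= np.exp(-eta * A[:, j])     # downweight rows that pay out against j
p = w / w.sum()
print("approximate game value:", p @ A @ avg_q)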

Alexander Razborov; Complexity of Semi-Algebraic and Algebraic Proofs

**Joint CSE and TOPS Talk**
May 13, 2016, 2:30pm

PCAR 290
Alexander Razborov, University of Chicago

Abstract: Semi-algebraic and algebraic proof systems form a very important part of modern proof complexity. They explore the possibility of proving meaningful statements using algebra and geometry instead of, or in addition to, logic, and they manipulate polynomial equations or inequalities rather than logical formulas. Many of these systems are based on the powerful Nullstellensatz/Positivstellensatz theorems, while others are underpinned by more ad hoc principles. This area is extremely well connected to other branches of the theory of computing and of mathematics, notably to approximation algorithms in combinatorial optimization, and it is growing fast.
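To make “proving statements using algebra” concrete, here is the smallest kind of certificate these systems formalize (a textbook-style example added here). The system {xy − 1 = 0, x = 0} is unsatisfiable, and a Nullstellensatz refutation exhibits 1 as a polynomial combination of the axioms:

\[
y \cdot x \;-\; 1 \cdot (xy - 1) \;=\; 1.
\]

The minimum degree needed by such a combination is the Nullstellensatz degree, one of the complexity measures that the systems in the talk refine.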

The talk will consist of two parts. First, we will give a general overview of the area and explain some of the connections mentioned above. In the second, more technical part, we will talk about a recent paper on the width of semi-algebraic proofs in dynamical proof systems like Cutting Planes or Lovász-Schrijver.

Sébastien Bubeck; New Results at the Crossroads of Convexity, Learning and Information Theory

CORE Series
Tuesday, Apr 19, 2016, 4pm

LOW 102
Sébastien Bubeck, Microsoft Research

ABSTRACT: I will present three new results: (i) the Cramer transform of the uniform measure on a convex body is a universal self-concordant barrier; (ii) projected gradient descent with Gaussian noise allows one to sample from a log-concave measure in polynomial time; and (iii) Thompson sampling combined with a multi-scale exploration solves the Bayesian convex bandit problem. The unifying theme in these results is the interplay between concepts from convex geometry, learning, and information theory.
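For orientation on result (i) (standard definitions, added here): the Cramer transform of the uniform measure on a convex body K ⊂ R^n is the Legendre conjugate of its logarithmic moment-generating function,

\[
\Lambda_K^{*}(x) \;=\; \sup_{\theta \in \mathbb{R}^n} \Bigl( \langle \theta, x \rangle \;-\; \log \int_K e^{\langle \theta, y \rangle}\, dy \Bigr),
\]

and the claim is that this convex function is a self-concordant barrier for K, uniformly over all convex bodies.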

BIO: Sébastien Bubeck is a Researcher at Microsoft Research (Theory Group) in Redmond, WA. Prior to Microsoft Research, he was an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. He received his MS in Mathematics from the École Normale Supérieure de Cachan and his PhD from the University of Lille 1, France, where his PhD thesis won several prizes, including the Jacques Neveu prize for the best French PhD thesis in Probability and Statistics. He was awarded the Sloan Research Fellowship in Computer Science in 2015 and a Best Paper Award at COLT (Conference on Learning Theory) in 2009. He was the general co-chair for COLT 2013 and 2014, and serves on the steering committee for COLT. He is also the author of the recent monograph, Convex Optimization: Algorithms and Complexity, published in 2015 as part of Foundations and Trends in Machine Learning.