# Autumn 2016 Calendar

Oct 11 [TOPS]
Harishchandra Ramadas, Department of Mathematics, University of Washington
A Deterministic Algorithm for Discrepancy Minimization

Nov 1 [TOPS]
Reza Eghbali, Department of Electrical Engineering, University of Washington
Worst Case Competitive Analysis of Online Algorithms for Conic Optimization

Nov 7 [EE Lytle Lecture – Broad Audience]
David Donoho, Department of Statistics, Stanford University

Nov 8 [EE Lytle Lecture – Colloquium]
David Donoho, Department of Statistics, Stanford University

Nov 15 [TOPS]
Rainer Sinn, School of Mathematics, Georgia Institute of Technology

Nov 22 [TOPS]
Jonathan Jonker, Department of Mathematics, University of Washington

Nov 29 [TOPS]
Yin-Tat Lee, Microsoft Research
Geodesic Walks

Dec 1 [CORE / CSE Theory Seminar]
David Shmoys, Department of Computer Science, Cornell University

Dec 2 [Math Across Campus – Broad Audience]
David Shmoys, Department of Computer Science, Cornell University

# Future Talks

The talks below are tentatively scheduled for future quarters.

Jan 31 [CORE]
Jon Lee, Department of Industrial and Operations Engineering, University of Michigan

Apr 11 [CORE]
Liza Levina, Department of Statistics, University of Michigan

# Yin-Tat Lee; Geodesic Walks

Nov 29, 2016, 4pm
SAV 131
Yin-Tat Lee, Microsoft Research

Abstract: We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from polytopes, a problem that is even more general than linear programming. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold. The resulting sampling algorithm for polytopes mixes in O*(mn^{3/4}) steps for a polytope in R^n specified by m inequalities. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O*(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes.
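The geodesic walk itself simulates an SDE on a Riemannian manifold; as a much simpler illustration of the underlying task of sampling a polytope {x : Ax ≤ b}, here is a naive Metropolis-style ball walk (a hypothetical sketch of my own, which mixes far more slowly than the Dikin or geodesic walks):

```python
import random

def inside(A, b, x):
    """Check Ax <= b componentwise for the polytope {x : Ax <= b}."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def ball_walk(A, b, x0, steps=1000, delta=0.1, seed=0):
    """Propose a small uniform step; accept it only if it stays feasible."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        y = [xi + rng.uniform(-delta, delta) for xi in x]
        if inside(A, b, y):
            x = y
    return x

# Unit square {0 <= x_i <= 1} written as Ax <= b.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
x = ball_walk(A, b, [0.5, 0.5])
```

Every accepted step remains feasible, so the final point is always inside the polytope; the point of the geodesic walk is to take much longer steps than this while keeping the mixing analysis under control.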

# Harishchandra Ramadas; A Deterministic Algorithm for Discrepancy Minimization

Oct 11, 2016, 4pm
SAV 131
Harishchandra Ramadas, Department of Mathematics, University of Washington

Given a collection of n items and n subsets of this collection, we want to color each item either red or blue so that in every subset, the number of red items is close to the number of blue items. In 1985, Spencer showed that there exists a coloring so that the “discrepancy” of each subset — the difference between the number of red and blue items — is O(sqrt(n)). We present a deterministic algorithm based on the multiplicative weights update method to find such a coloring in polynomial time.

Based on joint work with Avi Levy and Thomas Rothvoss. No background will be assumed.
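The quantity being minimized can be stated in a few lines of code. The sketch below (my own illustration, not the talk's algorithm) encodes a coloring as a ±1 vector and computes the discrepancy of a tiny set system:

```python
def discrepancy(sets, coloring):
    """Max over subsets of |#red - #blue|, with coloring[i] in {+1, -1}."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

# Tiny instance: 4 items, 4 subsets forming a cycle.
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
coloring = [+1, -1, +1, -1]          # alternate the two colors
print(discrepancy(sets, coloring))   # → 0
```

A uniformly random coloring already achieves O(sqrt(n log n)) discrepancy by a Chernoff bound; Spencer's theorem removes the log factor, and the talk's contribution is doing so deterministically.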

# Reza Eghbali; Worst Case Competitive Analysis of Online Algorithms for Conic Optimization

Nov 1, 2016, 4pm
SAV 131
Reza Eghbali, Department of Electrical Engineering, University of Washington

Abstract: Online optimization covers problems such as online resource allocation, online bipartite matching, adwords (a central problem in e-commerce and advertising), and adwords with separable concave returns. We analyze the worst-case competitive ratio of two primal-dual algorithms for a class of online convex (conic) optimization problems defined on the positive orthant that contains the previous examples as special cases. We derive a sufficient condition on the objective function that guarantees a constant worst-case competitive ratio (at least $\frac{1}{2}$) for monotone objective functions. Using the same framework, we also derive the competitive ratio for problems with non-monotone objective functions. We provide new examples of online problems on the positive orthant and the positive semidefinite cone that satisfy the sufficient condition. We show how smoothing can improve the competitive ratio of these algorithms; in particular, for separable functions, we show that the optimal smoothing can be derived by solving a convex optimization problem. This result allows us to directly optimize the competitive-ratio bound over a class of smoothing functions, and hence design effective smoothing customized for a given cost function.
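The 1/2 competitive ratio mentioned above is easiest to see in the integral special case of online bipartite matching, where plain greedy is 1/2-competitive. The toy sketch below (my own illustration, not the talk's primal-dual algorithms) shows greedy achieving exactly half of the offline optimum on an adversarial instance:

```python
def greedy_online_matching(adj):
    """Match each arriving left vertex to its first unmatched neighbor."""
    matched = set()
    size = 0
    for nbrs in adj:                 # left vertices arrive one at a time
        for v in nbrs:
            if v not in matched:
                matched.add(v)
                size += 1
                break
    return size

def max_matching(adj, i=0, used=frozenset()):
    """Brute-force offline maximum matching (fine for tiny instances)."""
    if i == len(adj):
        return 0
    best = max_matching(adj, i + 1, used)          # leave vertex i unmatched
    for v in adj[i]:
        if v not in used:
            best = max(best, 1 + max_matching(adj, i + 1, used | {v}))
    return best

# Adversarial instance: greedy commits the first vertex to neighbor 0,
# leaving the second vertex stuck, for half of the offline optimum.
adj = [[0, 1], [0]]
print(greedy_online_matching(adj), max_matching(adj))  # → 1 2
```

Any maximal matching is at least half a maximum one, which is the combinatorial shadow of the primal-dual bound in the conic setting.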

# Visiting Talks by Alexander Ioffe, Ting Kei Pong, and Asen Dontchev

Wednesday June 8, 2016

11:00 – 11:45 am
Speaker:
Ting Kei Pong (Hong Kong Polytechnic)

Title: Explicit estimation of KL exponent and linear convergence of 1st-order methods
Abstract: In this talk, we study the Kurdyka-Lojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we show that many convex or nonconvex optimization models that arise in applications such as sparse recovery have objectives whose KL exponent is 1/2: this indicates that various first-order methods are locally linearly convergent when applied to these models. Our results cover the sparse logistic regression problem and the least squares problem with SCAD or MCP regularization. We achieve this by relating the KL inequality with an error bound concept studied extensively by Luo and Tseng, and developing calculus rules for the KL exponent.
This is joint work with Guoyin Li.
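In standard notation (my own, not taken from the talk), the KL property with exponent 1/2 at a minimizer $x^*$ of $f$ can be written as:

```latex
% KL inequality with exponent 1/2: the subgradient norm dominates the
% square root of the function gap near a minimizer x^*.
\[
  \operatorname{dist}\bigl(0, \partial f(x)\bigr)
    \;\ge\; c \,\bigl(f(x) - f(x^*)\bigr)^{1/2}
  \qquad \text{for all } x \text{ near } x^*, \quad c > 0.
\]
% For such f, descent-type first-order methods contract the function gap
% geometrically, i.e., they are locally linearly convergent:
\[
  f(x_{k+1}) - f(x^*) \;\le\; \rho \,\bigl(f(x_k) - f(x^*)\bigr),
  \qquad \rho \in (0, 1).
\]
```

The talk's contribution is showing that this exponent-1/2 inequality holds for concrete models such as sparse logistic regression and SCAD/MCP-regularized least squares.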

1:15 – 2:00 pm
Speaker:
Asen L. Dontchev (Mathematical Reviews)

Title: The Four Theorems of Lawrence Graves
Abstract: The classical inverse/implicit function theorems revolve around solving an equation in terms of a parameter and tell us when the solution mapping associated with this equation is a (differentiable) function of the parameter. Already in 1927, Hildebrandt and Graves observed that differentiability can be set aside, in which case the solution mapping is merely Lipschitz continuous. The idea has evolved through subsequent extensions, the best known of which are various reincarnations of the Lyusternik-Graves theorem. In the last several decades it has become widely accepted that, in order to derive estimates for the solution mapping, e.g., to put them to use in proving convergence of algorithms, it is sufficient to differentiate what you can and leave the rest as is, hoping that the resulting problem is easier to handle. More sophisticated results may be obtained by employing various forms of metric regularity, starting from abstract results on mappings acting in metric spaces and ending with applications to numerical analysis. I will focus in particular on strong metric subregularity, a property which, put next to [strong] metric regularity, turns out to be equally instrumental in applications.

2:00 – 2:45 pm
Speaker:
Alexander Ioffe (Technion)

Title: On variational inequalities over polyhedral sets
Abstract: The results on regular behavior of solutions to variational inequalities over polyhedral sets, proved in a series of papers by Robinson, Ralph, and Dontchev-Rockafellar in the 90s, have long since become classics of variational analysis. But the available proofs, focused on the study of piecewise affine mappings and Lipschitz homeomorphisms and based essentially on matrix algebra and/or topology, are rather complicated and make practically no use of the techniques of variational analysis. The only exception is the proof by Dontchev and Rockafellar of their “critical face” regularity criterion. I shall discuss a different approach based entirely on elementary polyhedral geometry and a few basic principles of metric regularity theory. It leads to new proofs, simpler and shorter, and in addition yields some clarifying geometric information.

# Behçet Açıkmeşe; Convexification-Based Real-Time Optimization for High Performance Autonomous Control

May 31, 2016, 4pm
GUG 204
Behçet Açıkmeşe, Department of Aeronautics & Astronautics, University of Washington

Abstract: Many future engineering applications will require dramatic increases in our existing Autonomous Control capabilities. These include robotic sample return missions to planets, comets, and asteroids, formation flying spacecraft applications, applications utilizing swarms of autonomous agents, unmanned aerial, ground, and underwater vehicles, and autonomous commercial robotic applications. A key control challenge for many autonomous systems is to achieve the performance goals safely with minimal resource use in the presence of mission constraints and uncertainties. In principle these problems can be formulated and solved as optimization problems. The challenge is solving them reliably onboard the autonomous system in real time.

Our research has provided new analytical results that enabled the formulation of many autonomous control problems in a convex optimization framework, i.e., convexification of the control problem. The main mathematical theory used in achieving convexification is the duality theory of optimization. Duality theory manifests itself as Pontryagin’s Maximum Principle in infinite-dimensional optimization problems and as the KKT conditions in finite-dimensional parameter optimization problems. Both theories were instrumental in our developments. Our analytical framework also allowed the computation of precise bounds on the performance of a control system in terms of constrained controllability/reachability sets. This proved to be an important step in the rigorous V&V of the resulting control decision-making algorithms.
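For reference, the KKT conditions invoked above take the following standard form for a finite-dimensional problem $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$ (generic textbook notation, not specific to the talk):

```latex
\begin{aligned}
  &\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*)
      + \sum_j \mu_j \nabla h_j(x^*) = 0
    && \text{(stationarity)} \\
  &g_i(x^*) \le 0, \quad h_j(x^*) = 0
    && \text{(primal feasibility)} \\
  &\lambda_i \ge 0
    && \text{(dual feasibility)} \\
  &\lambda_i \, g_i(x^*) = 0
    && \text{(complementary slackness)}
\end{aligned}
```

Pontryagin’s Maximum Principle plays the analogous role when the decision variable is a control trajectory rather than a finite-dimensional vector.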

This presentation introduces several real-world aerospace engineering applications, where this approach either produced dramatically improved performance over the heritage technology or enabled a fundamentally new technology. A particularly important application is the fuel-optimal control for planetary soft landing, whose complete solution has been an open problem since the Apollo Moon landings of the 1960s. We developed a novel “lossless convexification” method, which enables the next generation of planetary missions, such as Mars robotic sample return and manned missions. Another application is in Markov chain synthesis with “safety” constraints, which enabled the development of new decentralized coordination and control methods for spacecraft swarms.
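A schematic of the lossless convexification idea (simplified notation of my own, not the talk's exact formulation): the nonconvex lower thrust bound $0 < \rho_1 \le \|u(t)\| \le \rho_2$ is replaced by a convex relaxation with a slack variable $\sigma$,

```latex
% Convex relaxation of the nonconvex thrust-magnitude constraint:
\[
  \|u(t)\| \le \sigma(t), \qquad \rho_1 \le \sigma(t) \le \rho_2,
\]
```

and one then proves, via the Maximum Principle, that the optimal solution of the relaxed problem satisfies $\|u(t)\| = \sigma(t)$ almost everywhere, so solving the convex relaxation loses nothing relative to the original nonconvex problem.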

Biographical Sketch: Behçet Açıkmeşe is a faculty member in the Department of Aeronautics and Astronautics at the University of Washington, Seattle. He received his Ph.D. in Aerospace Engineering from Purdue University. Previously, he was a senior technologist at JPL and a lecturer at Caltech. At JPL, Dr. Açıkmeşe developed control algorithms for planetary landing, spacecraft formation flying, and asteroid and comet sample return missions. He was the developer of the “flyaway” control algorithms for the Mars Science Laboratory (MSL) mission, which successfully landed on Mars in August 2012, and the reaction control system algorithms for the NASA SMAP mission, which was launched in 2015. Dr. Açıkmeşe is a recipient of the NSF CAREER Award and several NASA Group Achievement Awards for his contributions to the MSL and SMAP missions and to Mars precision landing and formation flying technology development. He is an Associate Fellow of AIAA, a senior member of IEEE, and an associate editor of IEEE Control Systems Magazine and AIAA JGCD.