Jon Lee; Mixed-Integer Nonlinear Optimization: Let’s get crazy!

CORE Series
Tuesday, Jan 31, 2017
4:00PM-5:00PM 
Jon Lee, University of Michigan

TITLE: Mixed-Integer Nonlinear Optimization: Let’s get crazy!

ABSTRACT: Mixed-Integer Nonlinear Optimization (MINLP) is the mother of all (deterministic) optimization problems. As such, it is too broad a class to treat effectively in full generality. Still, there are broad subclasses for which we have positive complexity results, as well as narrow subclasses for which we have negative complexity results. I will sample some of this landscape.

Considering the very damning complexity results, we should have modest goals on the computational side. Nonetheless, there has been substantial success with “general-purpose” algorithms and software for MINLP, applicable to rather broad (and practically relevant) formulations. In particular, for formulations with convex relaxations we have Bonmin (which implements NLP-based branch-and-bound and outer approximation), and for “factorable” formulations with non-convex relaxations we have BARON, SCIP, ANTIGONE, and Couenne (which implement spatial branch-and-bound). Still, the technology is neither extremely reliable nor scalable (in sharp contrast with LP, and in less sharp contrast with MILP). Much of the research effort in modeling and algorithm improvement relates to finding tight convexifications. I will present recent mathematical efforts that I have been involved in to make algorithmic approaches to MINLP more efficient and robust. In doing so, many areas of mathematics surprisingly pop up. Substantial parts of this are joint work with Daphne Skipper (U.S. Naval Academy) and with Emily Speakman (U. Michigan).

BIO: Jon Lee is the G. Lawton and Louise G. Johnson Professor of Engineering at the University of Michigan. He received his Ph.D. from Cornell University. Jon has previously been a faculty member at Yale University and the University of Kentucky, and an adjunct professor at New York University. He was a Research Staff Member at the IBM T.J. Watson Research Center, where he managed the mathematical programming group. Jon is the author of ~120 papers, the text “A First Course in Combinatorial Optimization” (Cambridge University Press), and the open-source book “A First Course in Linear Optimization” (Reex Press). He was the founding Managing Editor of the journal Discrete Optimization (2004–06), is currently co-Editor of the journal Mathematical Programming, and is on the editorial boards of the journals Optimization and Engineering and Discrete Applied Mathematics. Jon was Chair of the Executive Committee of the Mathematical Optimization Society (2008–10) and Chair of the INFORMS Optimization Society (2010–12). He was awarded the INFORMS Computing Society Prize (2010), and he has been a Fellow of INFORMS since 2013.

Amy Wiebe; On representing the positive semidefinite cone using the second order cone

Jan 17, 2017, 4pm
PDL C-401
Amy Wiebe, Department of Mathematics, University of Washington

Abstract: In this talk we present a recent result of Hamza Fawzi which answers the question of whether it is possible to express the general positive semidefinite cone using second-order cones. The result shows that the 3 x 3 positive semidefinite cone S^3_+ does not admit a second-order cone representation. The proof relies on the method of Gouveia, Parrilo, and Thomas, which shows that the existence of cone lifts of convex sets is equivalent to a certain factorization of the slack operator of the set. We explain how this framework is used in the paper to show that S^3_+ has no finite second-order cone lift.
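For contrast with the 3 x 3 result, it may help to recall the standard fact (not part of the abstract itself) that the 2 x 2 cone S^2_+ does admit a second-order cone representation:

```latex
\begin{pmatrix} a & b \\ b & c \end{pmatrix} \succeq 0
\quad\Longleftrightarrow\quad
\left\| \begin{pmatrix} 2b \\ a - c \end{pmatrix} \right\|_2 \;\le\; a + c ,
```

since squaring the right-hand constraint gives (a + c)^2 >= 4b^2 + (a - c)^2, i.e. ac >= b^2, together with a + c >= 0. Fawzi's theorem says that no analogous representation, even allowing auxiliary variables and higher-dimensional second-order cones, exists for S^3_+.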

Asen Dontchev; Strong Metric Subregularity

Feb 7, 2017, 4pm
THO 234
Asen Dontchev, Mathematical Reviews and University of Michigan

Abstract: Although the property of strong metric subregularity of set-valued mappings has been present in the literature under various names and with various (equivalent) definitions for more than two decades, it has attracted much less attention than its older “siblings”, metric regularity and strong metric regularity. In this talk I will try to show that strong metric subregularity shares the main features of these two more popular regularity properties and is no less instrumental in applications. In particular, it obeys the inverse function theorem paradigm, also for nonsmooth perturbations.
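As a reference point, the standard definition from the variational-analysis literature (not spelled out in the abstract itself) is the following: a set-valued mapping F from X to Y is strongly metrically subregular at x̄ for ȳ, with ȳ ∈ F(x̄), if there exist a constant κ ≥ 0 and a neighborhood U of x̄ such that

```latex
\| x - \bar{x} \| \;\le\; \kappa \, d\bigl( \bar{y}, \, F(x) \bigr)
\qquad \text{for all } x \in U .
```

In particular, x̄ is then an isolated point of the solution set F^{-1}(ȳ), which is what distinguishes this property from plain metric subregularity.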

A sufficient condition for strong metric subregularity in terms of surjectivity of the Fréchet coderivative will be presented, and it will be shown by a counterexample that surjectivity of the limiting coderivative is not, in general, a sufficient condition for this property. Then various versions of Newton’s method for solving variational inequalities will be considered, including inexact and semismooth methods. A characterization of the strong metric subregularity of the KKT mapping will be demonstrated, as well as a radius theorem for the optimality mapping of a nonlinear programming problem. Finally, an error estimate is derived for a discrete approximation in optimal control under strong metric subregularity of the mapping involved in the Pontryagin principle.

David Shmoys; Models and Algorithms for the Operation and Design of Bike-Sharing Systems

CORE Series (Joint with CSE Theory Seminar)
**Thursday**, Dec 1, 2016
11:00AM-12:00PM in CSE 403
David Shmoys, Cornell University

TITLE: Models and Algorithms for the Operation and Design of Bike-Sharing Systems

ABSTRACT: New York launched the largest bike-sharing system in North America in May 2013. We have worked with Citibike, using analytics and optimization to change how they manage the system. Huge rush-hour usage imbalances the system, and we address two questions: where should bikes be at the start of a day, and how can we mitigate the imbalances that develop? For the former question, we will survey the analytics we have employed, including an approach based on continuous-time Markov chains combined with integer programming models to compute daily stocking levels for the bikes, as well as methods for optimizing the capacity of the stations. For the question of mitigating the imbalances that develop, we will describe both heuristic methods and approximation algorithms that guide mid-rush-hour and overnight rebalancing, as well as the positioning of corrals, which have been one of the most effective means of creating adaptive capacity in the system.

This is joint work with Daniel Freund, Shane Henderson, Nanjing Jian, Ashkan Nourozi-Fard, Eoin O’Mahoney, and Alice Paul.

 


BIO: David Shmoys is the Laibe/Acheson Professor in the School of Operations Research and Information Engineering and the Department of Computer Science at Cornell University, where he currently serves as Director of the School of Operations Research and Information Engineering. Shmoys’s research has focused on the design and analysis of efficient algorithms for discrete optimization problems, with applications including scheduling, inventory theory, computational biology, and, most recently, stochastic optimization models and algorithms in computational sustainability. His graduate-level text, The Design of Approximation Algorithms, co-authored with David Williamson, was awarded the 2013 INFORMS Lanchester Prize. He is an INFORMS Fellow, a Fellow of the ACM, a SIAM Fellow, and was an NSF Presidential Young Investigator; he has served on numerous editorial boards and is currently Editor-in-Chief of Research in the Mathematical Sciences (for theoretical computer science) and an Associate Editor of Mathematics of Operations Research.


Jonathan Jonker; Singular Kalman Smoothing

Nov 22, 2016, 4pm
SAV 131
Jonathan Jonker, Department of Mathematics, University of Washington

Abstract: The Kalman filter is a process for estimating unknown variables using a sequence of known measurements. It is widely used in fields such as navigation, tracking, and economics. While this is not classically an optimization problem, it has been shown that by using a maximum a posteriori estimate it can be reformulated as a least-squares minimization problem. This reformulation assumes that certain covariance matrices are nonsingular; we show that this assumption may be dropped. Once the optimization problem is derived, we discuss solution methods and provide numerical examples.
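To illustrate the reformulation mentioned in the abstract, here is a deliberately tiny sketch (a scalar random-walk model with nonsingular covariances, not the talk’s general setting): with Gaussian process and measurement noise, the MAP estimate of the whole state trajectory is the solution of one stacked least-squares problem.

```python
import numpy as np

# Toy model: states follow a random walk x_k = x_{k-1} + w_k, w_k ~ N(0, q^2),
# and we observe y_k = x_k + v_k, v_k ~ N(0, r^2). The MAP trajectory estimate
# minimizes  sum_k (y_k - x_k)^2 / r^2  +  sum_k (x_k - x_{k-1})^2 / q^2,
# which is an ordinary least-squares problem in the stacked states.

rng = np.random.default_rng(0)
N = 50
q, r = 0.1, 1.0                            # process / measurement noise std devs
x_true = np.cumsum(rng.normal(0, q, N))    # random-walk ground truth
y = x_true + rng.normal(0, r, N)           # noisy measurements

# Stack into min_x ||G x - b||^2:
#   measurement rows:  (1/r) * x_k           = (1/r) * y_k
#   process rows:      (1/q) * (x_k - x_{k-1}) = 0
G = np.zeros((2 * N - 1, N))
b = np.zeros(2 * N - 1)
G[:N] = np.eye(N) / r
b[:N] = y / r
for k in range(1, N):
    G[N + k - 1, k] = 1.0 / q
    G[N + k - 1, k - 1] = -1.0 / q

x_map, *_ = np.linalg.lstsq(G, b, rcond=None)

mse_raw = np.mean((y - x_true) ** 2)
mse_map = np.mean((x_map - x_true) ** 2)
print(f"raw-measurement MSE: {mse_raw:.3f}, smoothed MSE: {mse_map:.3f}")
```

The smoothed trajectory tracks the truth far better than the raw measurements. The singular-covariance case treated in the talk is exactly where this construction breaks down as written, since the rows above are scaled by inverse standard deviations.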

Joint work with Sasha Aravkin.