# Madeleine Udell; Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage

Jun 1, 2017, 3:30pm
Location TBA
Madeleine Udell, Department of Operations Research and Information Engineering, Cornell University

Abstract: In this talk, we consider a fundamental class of convex matrix optimization problems with low-rank solutions. We show it is possible to solve these problems using far less memory than the natural size of the decision variable when the problem data has a concise representation. Our proposed method, SketchyCGM, is the first algorithm to offer provable convergence to an optimal point with an optimal memory footprint. SketchyCGM modifies a standard convex optimization method — the conditional gradient method — to work on a sketched version of the decision variable, and can recover the solution from this sketch. In contrast to recent work on non-convex methods for this problem class, SketchyCGM is a convex method, and our convergence guarantees do not rely on statistical assumptions.
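The storage saving rests on a standard idea from randomized sketching: a low-rank matrix can be reconstructed from two small random projections, so the full decision variable never needs to be stored. The NumPy sketch below illustrates that recovery step only; the variable names and sketch sizes are illustrative, and this is not an implementation of SketchyCGM itself.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 3      # matrix size and (low) rank
k, l = 8, 17             # sketch sizes, with k >= r and l >= k

# A rank-r matrix we pretend is too large to store explicitly.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Two small random sketches capture the column and row spaces.
Omega = rng.standard_normal((n, k))
Psi = rng.standard_normal((l, m))
Y = X @ Omega            # m x k range sketch
W = Psi @ X              # l x n co-range sketch

# Reconstruct X from the sketches alone: X_hat = Q (Psi Q)^+ W,
# where Q is an orthonormal basis for the range sketch Y.
Q, _ = np.linalg.qr(Y)
X_hat = Q @ np.linalg.pinv(Psi @ Q) @ W

# For an exactly rank-r matrix the recovery is exact up to roundoff.
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

Only `Y`, `W`, `Omega`, and `Psi` need to be kept, which is O((m + n)k) numbers rather than the mn entries of the decision variable.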

# Liza Levina; Interpretable Prediction Models for Network-Linked Data

CORE Series
Tuesday, April 11, 2017
EEB 125, 4:00-5:00PM
Liza Levina, University of Michigan

TITLE: Interpretable Prediction Models for Network-Linked Data

ABSTRACT: Prediction problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network. For example, in adolescent health studies of risk-taking behaviors, information on the subjects’ social networks is often available and plays an important role through network cohesion, the empirically observed phenomenon of friends behaving similarly. Taking cohesion into account should allow us to improve prediction. Here we propose a regression-based framework with a network penalty on individual node effects to encourage similarity between predictions for linked nodes, and show that it outperforms traditional models both theoretically and empirically when network cohesion is present. The framework is easily extended to other models, such as the generalized linear model and Cox’s proportional hazards model. Applications to predicting teenagers’ behavior based on both demographic covariates and their friendship networks from the AddHealth data are discussed in detail.
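One common way to encode such a network penalty, used here as an illustrative guess at the framework rather than the authors' exact formulation, is to give every node its own effect alpha_i and penalize disagreement across edges through the graph Laplacian: minimize ||y - alpha - X beta||^2 + lambda * alpha' L alpha. The quadratic problem has a closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 2

# Friendship network: two cliques of 10 joined by a single edge.
A = np.zeros((n, n))
A[:10, :10] = 1
A[10:, 10:] = 1
np.fill_diagonal(A, 0)
A[9, 10] = A[10, 9] = 1
L = np.diag(A.sum(axis=1)) - A     # graph Laplacian

# Cohesive data: individual effects are constant within each clique.
alpha = np.r_[np.ones(10), -np.ones(10)]
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.0])
y = alpha + X @ beta + 0.01 * rng.standard_normal(n)

# Penalized least squares over theta = (alpha, beta):
#   min ||y - alpha - X beta||^2 + lam * alpha' L alpha
lam = 1.0
M = np.hstack([np.eye(n), X])                # design matrix for theta
P = np.zeros((n + p, n + p))
P[:n, :n] = L                                # penalize only the node effects
theta = np.linalg.solve(M.T @ M + lam * P, M.T @ y)
alpha_hat, beta_hat = theta[:n], theta[n:]
```

Because the Laplacian penalty charges only for disagreement across edges, the fitted node effects stay nearly constant within each clique while the two cliques remain well separated, which is exactly the network-cohesion behavior described in the abstract.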

BIO: Liza Levina received her PhD in Statistics from UC Berkeley in 2002 and joined the University of Michigan the same year. Her research interests include networks, high-dimensional data, and sparsity. She has worked on estimating large covariance matrices, graphical models, and other topics in inference for high-dimensional data. She also works on statistical inference for network data, including problems of community detection and link prediction. Her research covers methodology, theory, and applications, especially to spectroscopy, remote sensing and, in the past, computer vision. She received the junior Noether Award from the ASA in 2010 and was elected a member of ISI in 2011.

# Abe Engle; Local Convergence Rates of a Gauss-Newton Method for Convex Compositions

Mar 14, 2017, 4pm
PDL C-401
Abe Engle, Department of Mathematics, University of Washington

Abstract: We discuss a Gauss-Newton methodology for minimizing convex compositions of smooth functions. We analyze current local rates of quadratic convergence when the subproblems are exactly solved and propose inexact methods that relax current sharpness assumptions while maintaining speeds of convergence.
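The classical exactly-solved case is worth keeping in mind: with the outer function h = ||.||^2, minimizing h(c(x)) is nonlinear least squares, and each Gauss-Newton subproblem linearizes the smooth inner map c and minimizes ||c(x) + J(x) d||^2 exactly. The sketch below shows the textbook quadratic local convergence on a zero-residual example; it does not reproduce the inexact variants the talk proposes.

```python
import numpy as np

def c(x):
    """Smooth inner map: intersect the unit circle with the line x1 = x2."""
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    """Jacobian of c."""
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0, -1.0]])

# Gauss-Newton: each subproblem min_d ||c(x) + J(x) d||^2 is solved
# exactly by linear least squares.
x = np.array([1.0, 0.5])
for _ in range(8):
    d = np.linalg.lstsq(J(x), -c(x), rcond=None)[0]
    x = x + d

x_star = np.array([1.0, 1.0]) / np.sqrt(2)   # the nearby solution
print(x, np.linalg.norm(c(x)))
```

Because the solution has zero residual, the error roughly squares at every step, so a handful of iterations already reaches machine precision; the sharpness assumptions mentioned in the abstract play the analogous role when h is a general convex function.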

# Rebecca Hoberg; Number balancing is as hard as Minkowski’s theorem

Feb 21, 2017, 4pm
PDL C-401
Rebecca Hoberg, Department of Mathematics, University of Washington

Abstract: The number balancing problem (NBP) is the following: given real numbers $a_1, \dots, a_n \in [0,1]$, find two distinct subsets of $[n]$ so that the difference of the sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most $O(\frac{\sqrt{n}}{2^n})$. Finding the minimum, however, is NP-hard. In polynomial time, the differencing algorithm by Karmarkar and Karp from 1982 can produce a solution with difference at most $n^{-\Theta(\log n)}$, but no further improvement has been made since then.
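The Karmarkar-Karp differencing algorithm is simple to state for the partition form of the problem: repeatedly pull out the two largest numbers and replace them by their difference, which implicitly commits them to opposite sides of the partition. A short sketch (integer inputs for a clean demonstration):

```python
import heapq

def karmarkar_karp(nums):
    """Differencing heuristic of Karmarkar and Karp (1982): repeatedly
    replace the two largest numbers by their difference; the last value
    remaining is an achievable difference between the two subset sums."""
    heap = [-x for x in nums]            # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)         # largest
        b = -heapq.heappop(heap)         # second largest
        heapq.heappush(heap, -(a - b))   # commit a and b to opposite sides
    return -heap[0]

print(karmarkar_karp([8, 7, 6, 5]))  # 0: perfect balance, 8 + 5 = 7 + 6
print(karmarkar_karp([5, 4, 3]))     # 2: best partition is {5} vs {4, 3}
```

The heuristic runs in O(n log n) time, and on random $[0,1]$ inputs achieves the $n^{-\Theta(\log n)}$ difference quoted above, far larger than the $O(\frac{\sqrt{n}}{2^n})$ guaranteed to exist.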

In this talk, we show that an approximate oracle for Minkowski’s Theorem gives an approximate NBP oracle, and conversely an approximate NBP oracle gives an approximate Minkowski oracle. In particular, we prove that any polynomial-time algorithm that guarantees an NBP solution of difference at most $O(\frac{2^{\sqrt{n}}}{2^n})$ would give a polynomial-factor approximation for Minkowski’s theorem as well as a polynomial-factor approximation algorithm for the Shortest Vector Problem. This is joint work with Harish Ramadas, Thomas Rothvoss, and Xin Yang.

# Lynn Chua; Gram spectrahedra

Feb 14, 2017, 4pm
THO 234
Lynn Chua, Department of Computer Science, UC Berkeley

Abstract: Representations of nonnegative polynomials as sums of squares are central to real algebraic geometry and the subject of active research. The sum-of-squares representations of a given polynomial are parametrized by the convex body of positive semidefinite Gram matrices, called the Gram spectrahedron. This is a fundamental object in polynomial optimization and convex algebraic geometry. We summarize results on sums of squares that fit naturally into the context of Gram spectrahedra, present some new results, and highlight related open questions. We discuss sum-of-squares representations of minimal length and relate them to Hermitian Gram spectrahedra and point evaluations on toric varieties.
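A tiny worked example, with my own parametrization, makes the object concrete. For $p(x) = x^4 + 2x^2 + 1$ and monomial basis $v = (1, x, x^2)$, matching coefficients in $v^\top G v = p$ leaves one free parameter $t$, and the Gram spectrahedron is the segment of $t$ values for which $G(t)$ is positive semidefinite:

```python
import numpy as np

def gram(t):
    """Gram matrices of p(x) = x^4 + 2x^2 + 1 in the basis (1, x, x^2);
    coefficient matching leaves the single free parameter t."""
    return np.array([[1.0, 0.0, t],
                     [0.0, 2.0 - 2.0 * t, 0.0],
                     [t, 0.0, 1.0]])

def poly_of(G):
    """Coefficients of v' G v in powers 1, x, ..., x^4 for v = (1, x, x^2)."""
    coeffs = np.zeros(5)
    for i in range(3):
        for j in range(3):
            coeffs[i + j] += G[i, j]
    return coeffs

p = np.array([1.0, 0.0, 2.0, 0.0, 1.0])
for t in (-1.0, 0.0, 1.0):
    G = gram(t)
    assert np.allclose(poly_of(G), p)         # every G(t) represents p
    print(t, np.linalg.eigvalsh(G).min())     # nonnegative iff -1 <= t <= 1
```

Here the Gram spectrahedron is the segment $-1 \le t \le 1$. Different points give different sum-of-squares certificates: a Cholesky-type factorization $G = V^\top V$ turns each PSD Gram matrix into an explicit SOS decomposition, e.g. $t = 1$ yields $p = (x^2 + 1)^2$, a representation of minimal length one.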

# Jon Lee; Mixed-Integer Nonlinear Optimization: Let’s get crazy!

CORE Series
Tuesday, Jan 31, 2017
LOW 105, 4:00-5:00PM
Jon Lee, University of Michigan

TITLE: Mixed-Integer Nonlinear Optimization: Let’s get crazy!

ABSTRACT: Mixed-Integer Nonlinear Optimization (MINLP) is the mother of all (deterministic) optimization problems. As such, it is too broad a category to do something for in general. Still, there are broad classes where we have positive complexity results, as well as narrow classes where we have negative complexity results. I will sample some of the landscape.

Considering the very damning complexity results, we should have modest goals on the computational side. Nonetheless, there has been some substantial success with “general-purpose” algorithms and software for MINLP, applicable to rather broad (and practically relevant) formulations. In particular, for formulations with convex relaxations we have Bonmin (which implements NLP-based branch-and-bound and outer approximation), and for “factorable” formulations with non-convex relaxations we have Baron, Scip, Antigone, and Couenne (which implement spatial branch-and-bound). Still, we do not yet have extremely reliable or scalable technology (in sharp contrast with LP, and in less sharp contrast with MILP). Much of the research effort in modeling and algorithm improvement relates to finding tight convexifications. I will present recent mathematical efforts that I have been involved in to make algorithmic approaches to MINLP more efficient and robust. In doing so, many areas of mathematics surprisingly pop up. Substantial parts of this are joint work with Daphne Skipper (U.S. Naval Academy) and with Emily Speakman (U. Michigan).
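The textbook example of such a convexification, offered here as background rather than as material from the talk, is the McCormick envelope of a bilinear term $w = xy$ over a box: four linear inequalities that spatial branch-and-bound tightens by splitting the box.

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Bounds on the bilinear term w = x*y implied by its McCormick
    envelope over the box [xl, xu] x [yl, yu].  The four inequalities
    come from expanding (x - xl)(y - yl) >= 0, (x - xu)(y - yu) >= 0,
    (x - xu)(y - yl) <= 0, and (x - xl)(y - yu) <= 0."""
    lower = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return lower, upper

# The envelope sandwiches the true product (0.25 at x = y = 0.5) ...
print(mccormick_bounds(0.5, 0.5, 0, 1, 0, 1))        # (0.0, 0.5)
# ... and tightens as spatial branch-and-bound shrinks the box.
print(mccormick_bounds(0.5, 0.5, 0, 0.6, 0.4, 0.6))  # roughly (0.24, 0.26)
```

Tight convexifications of richer nonconvex substructures, in the spirit of this example, are exactly what drive the relaxation quality in solvers like Baron, Scip, Antigone, and Couenne.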

BIO:  Jon Lee is the G. Lawton and Louise G. Johnson Professor of Engineering at the University of Michigan. He received his Ph.D. from Cornell University. Jon has previously been a faculty member at Yale University and the University of Kentucky, and an adjunct professor at New York University.  He was a Research Staff member at the IBM T.J. Watson Research Center, where he managed the mathematical programming group. Jon is author of ~120 papers, the text “A First Course in Combinatorial Optimization” (Cambridge University Press), and the open-source book “A First Course in Linear Optimization” (Reex Press). He was the founding Managing Editor of the journal Discrete Optimization (2004-06), he is currently co-Editor of the journal Mathematical Programming, and he is on the editorial boards of the journals Optimization and Engineering, and Discrete Applied Mathematics. Jon was Chair of the Executive Committee of the Mathematical Optimization Society (2008-10), and Chair of the INFORMS Optimization Society (2010-12). He was awarded the INFORMS Computing Society Prize (2010), and he is a Fellow of INFORMS (since 2013).

# Amy Wiebe; On representing the positive semidefinite cone using the second order cone

Jan 17, 2017, 4pm
PDL C-401
Amy Wiebe, Department of Mathematics, University of Washington

Abstract: In this talk we present a recent result of Hamza Fawzi which answers the question of whether it is possible to express the general positive semidefinite cone using second-order cones. The result shows that the 3 x 3 positive semidefinite cone S^3_+ does not admit a second-order cone representation. The proof relies on the method of Gouveia, Parrilo and Thomas, which shows that the existence of cone lifts of convex sets is equivalent to a certain factorization of the slack operator of the set. We explain how this framework is used in the paper to show that S^3_+ has no finite second-order cone lift.
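For contrast, the 2 x 2 case does have a second-order cone description, via the standard identity that [[a, b], [b, c]] is PSD exactly when sqrt((a - c)^2 + (2b)^2) <= a + c. A quick numerical check (function names are my own):

```python
import numpy as np

def psd_2x2(a, b, c):
    """Is the symmetric matrix [[a, b], [b, c]] positive semidefinite?"""
    return np.linalg.eigvalsh(np.array([[a, b], [b, c]])).min() >= -1e-12

def soc_2x2(a, b, c):
    """Second-order cone description of the same set:
    the point (a + c, a - c, 2b) lies in the second-order cone."""
    return np.hypot(a - c, 2 * b) <= a + c + 1e-12

# The two descriptions agree: S^2_+ has a second-order cone lift,
# in contrast to Fawzi's result for S^3_+.
examples = [(2, 1, 2), (1, 2, 1), (1, 0, 0), (0, 1, 0), (3, -1, 1)]
for a, b, c in examples:
    print((a, b, c), psd_2x2(a, b, c), soc_2x2(a, b, c))
```

Squaring the SOC inequality gives (a + c)^2 >= (a - c)^2 + 4b^2, i.e. ac >= b^2, which together with a + c >= 0 is exactly the PSD condition; Fawzi's theorem says no finite collection of such second-order conditions, even after lifting, can capture S^3_+.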