Category Archives: Fall 2015

Victor Falgas-Ravry; Finding large full subgraphs

Tuesday, December 8, 2015, 4:00 pm
GUG 204

Victor Falgas-Ravry, Vanderbilt University.
Finding large full subgraphs

Abstract: A graph is a pair G=(V,E), where V is a set of vertices and E is a set of pairs from V forming the edges of G. Suppose G is a graph on n vertices in which a proportion p of all possible edges are present, for some fixed p with 0 \leq p \leq 1. We call a subgraph H of G on m vertices "full" if every vertex in H is connected by an edge to at least p(m-1) other vertices in H. In other words, a subgraph H is full if its minimum degree is at least the expected average degree for an m-vertex subgraph. If we think of the graph G as representing a network, then its full subgraphs correspond to "cliquey" communities within that network.

In this talk I shall consider the question of how large a full subgraph I can be guaranteed to find. Improving on earlier work of Erdős, Łuczak and Spencer, I shall give a simple algorithm for finding full subgraphs of order at least n^{2/3}, and show that this order is best possible for infinitely many values of p.
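To make the definition of a full subgraph concrete, here is a small Python check of the condition above; the plain edge-set representation and the function name are mine, purely for illustration.

```python
def is_full(edges, n, subset, p):
    """Check whether `subset` induces a full subgraph of a graph G on n vertices.

    edges  : set of frozensets {u, v}, the edges of G
    subset : vertices of the candidate subgraph H (m of them)
    p      : edge density of G, i.e. |E| divided by n*(n-1)/2

    H is full if every vertex of H has at least p*(m-1) neighbours inside H.
    """
    H = set(subset)
    m = len(H)
    for v in H:
        deg_in_H = sum(1 for u in H if u != v and frozenset((u, v)) in edges)
        if deg_in_H < p * (m - 1):
            return False
    return True

# Tiny example: G is a 4-cycle on vertices 0..3, so p = 4/6.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
p = len(edges) / (4 * 3 / 2)
print(is_full(edges, 4, [0, 1, 2, 3], p))  # each vertex has 2 >= p*3 = 2 neighbours -> True
```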

Joint work with Klas Markström and Jacques Verstraëte.

Pedro Domingos; Recursive Decomposition for Nonconvex Optimization

CORE Series
Tuesday, November 24, 2015, 4:00 pm
Gowen Hall, Room 201
Pedro Domingos, University of Washington.
Recursive Decomposition for Nonconvex Optimization

ABSTRACT: Most real-world continuous optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be brought to bear. Based on this, we propose a problem-decomposition approach to nonconvex optimization. Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent subfunctions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent. The variables to set are chosen by graph partitioning, ensuring decomposition whenever possible. We show analytically that RDIS can solve a broad class of nonconvex optimization problems exponentially faster than gradient descent with random restarts. Experimentally, RDIS outperforms standard techniques on problems like structure from motion and protein folding.
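The abstract describes RDIS only at a high level. As a rough structural sketch in Python (not the authors' implementation: the variable-selection rule below is a naive stand-in for graph partitioning, and no attempt is made to detect genuinely independent subfunctions), a recursive "set some variables, then optimize the rest" strategy might look like this, with gradient-based local optimization at the base case.

```python
import numpy as np
from scipy.optimize import minimize

def recursive_decompose(f, lower, upper, block_size=2, n_restarts=3, rng=None):
    """Toy recursive-decomposition strategy.

    If only a few variables remain, optimize them directly with a local
    solver plus random restarts.  Otherwise fix a block of variables at
    candidate values (RDIS would choose this block by graph partitioning
    so that the remainder decomposes) and recurse on the rest.
    """
    rng = rng or np.random.default_rng(0)
    n = len(lower)

    if n <= block_size:
        best_x, best_val = None, np.inf
        for _ in range(n_restarts):
            x0 = rng.uniform(lower, upper)
            res = minimize(f, x0, bounds=list(zip(lower, upper)))
            if res.fun < best_val:
                best_x, best_val = res.x, res.fun
        return best_x, best_val

    k = n // 2  # naive choice of the variables to set
    best_x, best_val = None, np.inf
    for _ in range(n_restarts):
        fixed = rng.uniform(lower[:k], upper[:k])        # candidate assignment
        sub = lambda y: f(np.concatenate([fixed, y]))    # remaining subfunction
        y, val = recursive_decompose(sub, lower[k:], upper[k:], block_size, n_restarts, rng)
        if val < best_val:
            best_x, best_val = np.concatenate([fixed, y]), val
    return best_x, best_val

# Toy usage on a multimodal objective.
f = lambda x: float(np.sum(np.sin(3 * x) + 0.1 * x ** 2))
lo, hi = -5 * np.ones(4), 5 * np.ones(4)
x_best, val_best = recursive_decompose(f, lo, hi)
```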

Bio: Pedro Domingos is a professor of computer science at the University of Washington and the author of “The Master Algorithm”. He is a winner of the SIGKDD Innovation Award, the highest honor in data science. He is a Fellow of the Association for the Advancement of Artificial Intelligence, and has received a Fulbright Scholarship, a Sloan Fellowship, the National Science Foundation’s CAREER Award, and numerous best paper awards. He received his Ph.D. from the University of California at Irvine and is the author or co-author of over 200 technical publications. He has held visiting positions at Stanford, Carnegie Mellon, and MIT. He co-founded the International Machine Learning Society in 2001. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science, including scaling learning algorithms to big data, maximizing word of mouth in social networks, unifying logic and probability, and deep learning.

Semidefinite descriptions of regular polygons (and beyond), James Saunderson

Tuesday, November 10, 2015, 4:00 pm
GUG 204

James Saunderson, University of Washington.
Semidefinite descriptions of regular polygons (and beyond)

Abstract: This talk is focused on describing certain polytopes as the feasible regions of semidefinite programs (SDPs). The polytopes of interest arise from abelian groups in a natural way and include regular polygons in the plane, parity polytopes, certain cyclic polytopes, and cut polytopes. These polytopes provide excellent examples through which we can examine general questions about the relative expressive power of semidefinite programming vs linear programming, the role of symmetry in semidefinite descriptions, and differences in the size of descriptions based on the Lasserre hierarchy and those based on general SDPs.
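For readers less familiar with the terminology, the standard way to make "described as the feasible region of an SDP" precise is via projections of spectrahedra; the following generic definition (notation mine, not taken from the talk) may help. A polytope P \subseteq R^n has a semidefinite description of size d if

P = \pi( { X \in S^d : X \succeq 0, \mathcal{A}(X) = b } ),

where S^d is the space of d x d symmetric matrices, \mathcal{A} is a linear map, b is a fixed vector, and \pi is a linear projection onto R^n. Linear programming descriptions arise as the special case in which X is constrained to be diagonal, which is one way to phrase the comparison between the expressive power of SDPs and LPs.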

In this talk I will describe a combinatorial way to construct such semidefinite descriptions for this family of polytopes. When applied to the nth cut polytope we obtain a proof that the Lasserre hierarchy is exact after \lceil n/2\rceil rounds (establishing a conjecture of Laurent). When applied to trigonometric cyclic polytopes we obtain semidefinite descriptions that are significantly smaller than any possible linear programming descriptions of these polytopes.

Based on joint work with Hamza Fawzi and Pablo Parrilo.

A Three-Operator Splitting Scheme and its Optimization Applications, Damek Davis

Tuesday, October 27, 2015, 4:00 pm
GUG 204

Damek Davis, Department of Mathematics, University of California, Los Angeles.
A Three-Operator Splitting Scheme and its Optimization Applications

Abstract: For over 50 years, operator-splitting schemes have been used to solve PDEs, feasibility problems, and, more recently, large-scale problems in data analysis. Despite much development, it is known that most existing splitting schemes reduce to one of three basic schemes, each introduced between 15 and 36 years ago.

We introduce a new splitting scheme that extends the Douglas-Rachford and forward-backward splitting schemes to monotone inclusions with three operators, one of which is cocoercive. We discuss why this algorithm works, derive several special cases, including a simple three-block ADMM algorithm, and introduce an acceleration that achieves the optimal rate of convergence for strongly monotone inclusions. Finally, we discuss several applications and future research directions.
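The abstract does not state the iteration itself; below is a Python sketch of a three-operator splitting update in the form it is typically presented (evaluate one prox, take a gradient of the smooth term there, then apply the second prox to a reflected step), applied to the toy problem minimize lam*||x||_1 + indicator_{[0,1]^n}(x) + 0.5*||Ax - b||^2. The step size, relaxation parameter, and problem data are illustrative choices, not recommendations.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proj_box(v, lo=0.0, hi=1.0):
    """Projection onto [lo, hi]^n, i.e. the prox of the box indicator."""
    return np.clip(v, lo, hi)

def three_operator_splitting(A, b, lam, gamma, n_iters=500):
    """Sketch of a three-operator splitting iteration for

        minimize  lam*||x||_1 + indicator_{[0,1]^n}(x) + 0.5*||Ax - b||^2,

    where the last term is smooth, so its gradient supplies the cocoercive
    operator.  gamma should be smaller than 2 / ||A||^2.
    """
    n = A.shape[1]
    z = np.zeros(n)
    grad = lambda x: A.T @ (A @ x - b)   # gradient of the smooth term
    for _ in range(n_iters):
        x_b = proj_box(z)                                            # first prox
        x_a = prox_l1(2 * x_b - z - gamma * grad(x_b), lam * gamma)  # second prox
        z = z + (x_a - x_b)                                          # relaxation = 1
    return proj_box(z)

# Toy usage with random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2
x = three_operator_splitting(A, b, lam=0.1, gamma=gamma)
```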

An Interior Point Approach for Data Science, Sasha Aravkin

Tuesday, October 20, 2015, 4:00 pm
GUG 204

Sasha Aravkin, University of Washington.
An Interior Point Approach for Data Science

Abstract: Many important applications can be formulated as large-scale optimization problems, including classification in machine learning, data assimilation in weather prediction, inverse problems, and medical and seismic imaging. While first-order methods have proven widely successful in recent years, new developments suggest that matrix-free second-order methods, such as interior-point methods, can be competitive.

We first develop a modeling framework for a wide range of problems, and show how conjugate representations can be exploited to design an interior point approach for this class. We then show several applications, with emphasis on modeling and problem structure, and end by discussing extensions to large-scale problems.
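As a minimal illustration of the second-order, interior-point flavor referred to above (a generic log-barrier sketch for nonnegative least squares, not the conjugate-representation framework of the talk; all names and parameters are mine), each outer iteration approximately minimizes a barrier-augmented objective with damped Newton steps and then shrinks the barrier parameter.

```python
import numpy as np

def barrier_nnls(A, b, mu=1.0, mu_factor=0.5, newton_steps=20, tol=1e-8):
    """Log-barrier sketch for:  minimize 0.5*||Ax - b||^2  subject to  x >= 0.

    Outer loop: shrink the barrier parameter mu.
    Inner loop: damped Newton steps on 0.5*||Ax - b||^2 - mu * sum(log(x)).
    """
    n = A.shape[1]
    x = np.ones(n)                      # strictly feasible start
    AtA, Atb = A.T @ A, A.T @ b

    def barrier_obj(y, mu):
        r = A @ y - b
        return 0.5 * r @ r - mu * np.sum(np.log(y))

    while mu > tol:
        for _ in range(newton_steps):
            grad = AtA @ x - Atb - mu / x
            hess = AtA + np.diag(mu / x ** 2)
            dx = np.linalg.solve(hess, -grad)
            t = 1.0
            while np.any(x + t * dx <= 0):            # stay in the interior
                t *= 0.5
            while barrier_obj(x + t * dx, mu) > barrier_obj(x, mu) and t > 1e-12:
                t *= 0.5                              # crude backtracking
            x = x + t * dx
        mu *= mu_factor
    return x

# Toy usage.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x = barrier_nnls(A, b)
print(x.min() > 0)   # iterates remain strictly feasible
```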

Michael Overton; Nonsmooth, Nonconvex Optimization: Algorithms and Examples

CORE Series
Tuesday, October 13, 2015, 4:00 pm
Raitt Hall (RAI), Room 121
Michael Overton, New York University.
Nonsmooth, Nonconvex Optimization: Algorithms and Examples

Abstract: In many applications one wishes to minimize an objective function that is not convex and is not differentiable at its minimizers. We discuss two algorithms for minimization of nonsmooth, nonconvex functions. Gradient Sampling is a simple method that, although computationally intensive, has a nice convergence theory. The method is robust, and the convergence theory has recently been extended to constrained problems. BFGS is a well-known method, developed for smooth problems, but which is remarkably effective for nonsmooth problems too. Although our theoretical results in the nonsmooth case are quite limited, we have made some remarkable empirical observations and have had broad success in applications. Limited Memory BFGS is a popular extension for large problems, and it is also applicable to the nonsmooth case, although our experience with it is more mixed. Throughout the talk we illustrate the ideas through examples, some very easy and some very challenging. Our work is joint with Jim Burke (U. Washington) and Adrian Lewis (Cornell).
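The abstract describes Gradient Sampling only at a high level. Below is a simplified Python sketch of one version of the idea (with a fixed sampling radius and a crude line search; the full method also shrinks the sampling radius adaptively and has other safeguards): sample gradients at nearby points, take the minimum-norm vector in their convex hull as an approximate steepest-descent direction, and step against it.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_in_hull(G):
    """Smallest-norm point in the convex hull of the rows of G,
    found by solving a small QP over the simplex with SLSQP."""
    k = G.shape[0]
    obj = lambda w: float(np.sum((G.T @ w) ** 2))
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.ones(k) / k, bounds=[(0.0, 1.0)] * k, constraints=cons)
    return G.T @ res.x

def gradient_sampling(f, grad, x0, eps=0.1, n_samples=10, step=0.1, n_iters=100, rng=None):
    """Simplified gradient-sampling loop with a fixed sampling radius eps."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        pts = x + eps * rng.standard_normal((n_samples, x.size))
        G = np.vstack([grad(x)] + [grad(p) for p in pts])
        g = min_norm_in_hull(G)             # approximate steepest-descent direction
        if np.linalg.norm(g) < 1e-8:
            break
        t = step
        while f(x - t * g) > f(x) and t > 1e-12:   # backtracking line search
            t *= 0.5
        x = x - t * g
    return x

# Toy nonsmooth example: f(x) = |x_1| + 2*|x_2|, minimized at the origin.
f = lambda x: abs(x[0]) + 2 * abs(x[1])
grad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])  # gradient a.e.
print(gradient_sampling(f, grad, [1.5, -2.0]))
```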

Bio: Michael L. Overton is Professor of Computer Science and Mathematics at the Courant Institute of Mathematical Sciences, New York University. He received his Ph.D. in Computer Science from Stanford University in 1979. He is a fellow of SIAM (Society for Industrial and Applied Mathematics) and of the IMA (Institute of Mathematics and its Applications, UK). He served on the Council and Board of Trustees of SIAM from 1991 to 2005, including a term as Chair of the Board from 2004 to 2005. He served as Editor-in-Chief of SIAM Journal on Optimization from 1995 to 1999 and of the IMA Journal of Numerical Analysis from 2007 to 2008, and was Editor-in-Chief of the MPS (Mathematical Programming Society)-SIAM joint book series from 2003 to 2007. He is currently an editor of SIAM Journal on Matrix Analysis and Applications, IMA Journal of Numerical Analysis, Foundations of Computational Mathematics, and Numerische Mathematik. His research interests are at the interface of optimization and linear algebra, especially nonsmooth optimization problems involving eigenvalues, pseudospectra, stability, and robust control. He is the author of "Numerical Computing with IEEE Floating Point Arithmetic" (SIAM, 2001).

Fall 2015 Calendar

October 13 [CORE]
Michael Overton, Courant Institute of Mathematical Sciences, New York University
Nonsmooth, Nonconvex Optimization: Algorithms and Examples

October 20
Sasha Aravkin, Department of Applied Mathematics, University of Washington
An Interior Point Approach for Data Science

October 27
Damek Davis, Department of Mathematics, University of California, Los Angeles
A Three-Operator Splitting Scheme and its Optimization Applications

November 10
James Saunderson, Department of Electrical Engineering, University of Washington
Semidefinite descriptions of regular polygons (and beyond)

November 24 [CORE]
Pedro Domingos, Department of Computer Science and Engineering, University of Washington
Recursive Decomposition for Nonconvex Optimization

December 8
Victor Falgas-Ravry, Department of Mathematics, Vanderbilt University
Finding large full subgraphs