Category Archives: Spring 2015

Jonathan Kelner; Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

CORE Series
May 19, 2015, 4:00pm
EEB 125
Jonathan Kelner, Department of Mathematics, MIT.
Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

Abstract: Flow and cut problems on graphs are among the most fundamental and extensively studied problems in Operations Research and Optimization, playing a foundational role in both the theory and practice of algorithm design. While the classical algorithms for these problems were largely based on combinatorial approaches, the past several years have witnessed the emergence of a powerful collection of new techniques based on geometry, spectral graph theory, computational linear algebra, randomization, and iterative methods from continuous optimization. These numerical techniques have allowed researchers to provide better provable algorithms for a wide range of graph problems, in some cases breaking algorithmic barriers that had stood for several decades.

The relationship between graph theory and numerical analysis has proven fruitful in the other direction as well. In addition to providing numerical techniques for graph algorithms, it has given rise to new combinatorial techniques for computational linear algebra. In particular, it has led to fast algorithms for Laplacian linear systems, which have broad applications in numerical scientific computing, machine learning, operations research, computer vision, and network analysis, among others.

In this talk, I will discuss some of the recurring themes that arise in this confluence of fields. I will apply these to sketch algorithms that run in close to linear time for two basic algorithmic problems: solving Laplacian linear systems and finding approximately maximum flows on undirected graphs.
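
The near-linear-time solvers from the talk are well beyond a few lines, but the central object, the graph Laplacian and the linear systems it defines, is easy to exhibit. Below is a minimal Python sketch; the graph, the right-hand side, and the use of off-the-shelf conjugate gradient are illustrative assumptions, not the talk's algorithms.

    import numpy as np
    from scipy.sparse import csgraph, csr_matrix
    from scipy.sparse.linalg import cg

    # A small illustrative graph; the edge list is made up for this example.
    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0

    L = csgraph.laplacian(csr_matrix(A))  # L = D - A, symmetric PSD

    # L x = b is solvable exactly when b sums to zero, i.e. b is orthogonal
    # to the all-ones null vector of a connected graph's Laplacian.
    b = np.array([1.0, -1.0, 2.0, -2.0])

    # Plain conjugate gradient stands in here for the talk's solvers, which
    # reach near-linear total work via graph-theoretic preconditioning.
    x, info = cg(L, b)
    print(info, x - x.mean())  # info == 0 means converged; center the solution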

The talk will be based on joint work with Yin Tat Lee, Aaron Sidford, Lorenzo Orecchia, and Zeyuan Allen-Zhu.

Bio: Jonathan Kelner is an Associate Professor of Applied Mathematics in the MIT Department of Mathematics and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research focuses on the application of techniques from pure mathematics to the solution of fundamental problems in algorithms and complexity theory. He was an undergraduate at Harvard University, and he received his Ph.D. in Computer Science from MIT in 2006. Before joining the MIT faculty, he spent a year as a member of the Institute for Advanced Study. He has received a variety of awards for his work, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, the Best Paper Awards at STOC 2011, SIGMETRICS 2011, and SODA 2014, the Harold E. Edgerton Faculty Achievement Award, and the MIT School of Science Teaching Prize.

Frank Permenter; Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

May 5, 2015, 2:00pm
THO 119
Frank Permenter, EECS Department, MIT.
Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

Abstract: Semidefinite programs (SDPs) arising in practice frequently have no interior and hence can be reformulated as smaller, more easily solved problems. Unfortunately, finding these reformulations requires executing a facial reduction procedure, which is typically computationally expensive. In this talk, we develop a practical facial reduction procedure based on computationally efficient approximations of the positive semidefinite cone. The proposed method simplifies SDPs with no interior by solving a sequence of easier optimization problems (e.g., linear programs) and could be a useful pre-processing step for SDP solvers. We demonstrate the effectiveness of the method on SDPs arising in practice and describe our publicly available software implementation. We also give a post-processing procedure for dual solution recovery that applies generally to facial-reduction-based pre-processing techniques. Joint work with Pablo Parrilo.
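
The procedure itself lives in the paper and software, but the flavor of a single facial reduction step under the simplest approximation, replacing the PSD cone by the cone of nonnegative diagonal matrices so that the search becomes a linear program, can be sketched in Python with CVXPY. All problem data below are placeholders invented for illustration:

    import numpy as np
    import cvxpy as cp

    # Placeholder SDP data in standard form:
    #   minimize <C, X>  subject to  <A_i, X> = b_i,  X PSD.
    # Here <A_1, X> = X_00 = 0 forces feasible X onto a face of the PSD cone.
    A = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, -1.0])]
    b = np.array([0.0, 0.0])
    m = len(A)

    # A facial reduction step seeks S = sum_i y_i A_i with b^T y = 0,
    # S PSD and S != 0: every feasible X then satisfies <S, X> = 0.
    # Requiring S diagonal and nonnegative instead of PSD yields an LP.
    y = cp.Variable(m)
    S = sum(y[i] * A[i] for i in range(m))
    constraints = [
        b @ y == 0,
        S - cp.diag(cp.diag(S)) == 0,  # S diagonal ...
        cp.diag(S) >= 0,               # ... with nonnegative entries
        cp.trace(S) == 1,              # normalization, excludes S = 0
    ]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    if problem.status == cp.OPTIMAL:
        print("reducing certificate:\n", np.round(S.value, 3))

Each certificate found this way restricts the problem to a face of the PSD cone; iterating, and then rewriting the SDP on the final face, yields the smaller equivalent problem.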

Joel Tropp; Applied Random Matrix Theory

CORE Series
Tuesday, April 28, 2015, 4pm
EEB 125
Joel Tropp, Caltech.
Applied Random Matrix Theory

Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria.

This talk offers an invitation to the field of matrix concentration inequalities. The presentation begins with some history of random matrix theory, and it introduces an important probability inequality for scalar random variables. It describes a flexible model for random matrices that is suitable for many problems, and it discusses one of the most important matrix concentration results, the matrix Bernstein inequality. The talk concludes with some applications drawn from algorithms, combinatorics, statistics, signal processing, scientific computing, and beyond.
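
For reference, here is one common statement of the matrix Bernstein inequality; this version follows Tropp's surveys, and constants and normalizations vary slightly across sources.

    % Let S_1, ..., S_n be independent random d1 x d2 matrices with
    % E[S_k] = 0 and ||S_k|| <= L almost surely.  Set Z = sum_k S_k and
    % v(Z) = max{ ||sum_k E[S_k S_k^*]||, ||sum_k E[S_k^* S_k]|| }.
    \[
      \Pr\{\, \|Z\| \ge t \,\} \;\le\; (d_1 + d_2)\,
      \exp\!\left( \frac{-t^2/2}{v(Z) + L t / 3} \right),
      \qquad t \ge 0.
    \]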

Bio: Joel A. Tropp is Professor of Applied & Computational Mathematics at the California Institute of Technology. He earned his PhD degree in Computational Applied Mathematics from the University of Texas at Austin in 2004. Dr. Tropp’s work lies at the interface of applied mathematics, electrical engineering, computer science, and statistics. This research concerns the theoretical and computational aspects of data analysis, sparse modeling, randomized linear algebra, and random matrix theory. Dr. Tropp has received several major awards for young researchers, including the 2007 ONR Young Investigator Award and the 2008 Presidential Early Career Award for Scientists and Engineers. He is also the winner of the 6th Vasil A. Popov Prize and the 2011 Monroe H. Martin Prize.

Andreas Griewank; Lipschitzian Piecewise Smooth Optimization

April 14, 2015, 4:00pm
GUG 220
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

Abstract: We address the problem of minimizing objectives from the class of piecewise differentiable functions whose nonsmoothness can be encapsulated in the absolute value function. They possess local piecewise linear approximations with a discrepancy that can be bounded by a quadratic proximal term. This overestimating local model is continuous but generally nonconvex. It can be generated in its abs-normal form by a minor extension of standard algorithmic differentiation tools. Here we demonstrate how the local model can be minimized by a bundle-type method, which benefits from the availability of additional gray-box information via the abs-normal form. In the convex case our algorithm realizes the consistent steepest descent trajectory for which finite convergence was established by Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal, specifically covering counterexamples where steepest descent with exact line search famously fails.
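
For orientation, the abs-normal form mentioned above represents the piecewise linear model by introducing a vector z of switching variables, roughly as follows; the exact notation varies across the underlying papers.

    \[
      \Delta z = c + Z\,\Delta x + L\,|\Delta z|, \qquad
      \Delta y = b + J\,\Delta x + Y\,|\Delta z|.
    \]
    % L is strictly lower triangular, so Delta z is obtained by a single
    % forward substitution sweep; |.| acts componentwise.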

Sebastien Bubeck; The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

May 12, 2015, 4pm
GUG 220
Sebastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

Abstract: A fundamental result in the theory of Interior Point Methods is Nesterov and Nemirovski’s construction of a universal self-concordant barrier. In this talk I will introduce the entropic barrier, a new (and in some sense optimal) universal self-concordant barrier. The entropic barrier connects many topics of interest in Machine Learning: exponential families, convex duality, log-concave distributions, Mirror Descent, and exponential weights.
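
The construction is short enough to state. For a convex body K in R^n, let f be the log-partition function of the exponential family supported on K; the entropic barrier is its Fenchel conjugate:

    \[
      f(\theta) = \log \int_{K} e^{\langle \theta, x \rangle}\, dx,
      \qquad
      F(x) = f^{*}(x) = \sup_{\theta \in \mathbb{R}^n}
        \bigl( \langle \theta, x \rangle - f(\theta) \bigr).
    \]
    % F is a self-concordant barrier on K with parameter (1 + o(1)) n,
    % which is the sense in which it is optimal.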

Spring 2015 Calendar

March 30 [CORE]
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

April 14
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

April 28 [CORE]
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
Applied Random Matrix Theory

May 5
Frank Permenter, EECS Department, MIT.
Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

May 12
Sebastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

May 19 [CORE]
Jonathan Kelner, MIT.
Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

Click on the title or see below for the abstracts.

Andrew Fitzgibbon; Learning about Shape

CORE Series
Monday, March 30, 2015, 4pm
EEB 125
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

Abstract: Vision is naturally concerned with shape. If we could recover a stable and compact representation of object shape from images, we would hope it might aid with numerous vision tasks. Just the silhouette of an object is a strong cue to its identity, and the silhouette is generated by its 3D shape. In computer vision, many representations have been explored: collections of points, “simple” shapes like ellipsoids or polyhedra, algebraic surfaces and other implicit surfaces, generalized cylinders and ribbons, and piecewise (rational) polynomial representations like NURBS and subdivision surfaces. Many of these can be embedded more or less straightforwardly into probabilistic shape spaces, and recovery (a.k.a. “learning”) of one such space is the goal of the experimental part of this talk.

When recovering shape from measurements, there is at first sight a natural hierarchy of stability: straight lines can represent very little but may be robustly recovered from data; then come conic sections, splines with fixed knots, and general piecewise representations. I will show, however, that one can pass almost immediately to piecewise representations without loss of robustness. In particular, I shall show how a popular representation in computer graphics—subdivision curves and surfaces—may readily be fit to a variety of image data using the technique for ellipse fitting introduced by Gander, Golub, and Strebel in 1994. I show how we can address the previously difficult problem of recovering 3D shape from multiple silhouettes, and the considerably harder problem which arises when the silhouettes are not from the same object instance, but from members of an object class, for example 30 images of different dolphins, each in a different pose. This requires that we simultaneously learn the shape space and the projections from each instance into its image. This simultaneous optimization is reminiscent of the bundle adjustment problem in computer vision, and indeed our most recent application, to tracking the human hand, makes good use of the Ceres Solver.
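
The Gander-Golub-Strebel idea referenced above, optimizing the shape parameters and the per-point curve parameters simultaneously, fits in a few lines of Python for the simplest case, a circle. The synthetic data and crude initialization below are illustrative assumptions only:

    import numpy as np
    from scipy.optimize import least_squares

    # Synthetic noisy points on a circle (made up for this sketch).
    rng = np.random.default_rng(1)
    phi_true = np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False)
    pts = np.c_[2.0 + 1.5 * np.cos(phi_true),
                -1.0 + 1.5 * np.sin(phi_true)]
    pts += 0.05 * rng.standard_normal(pts.shape)

    def residuals(params):
        # Unknowns: center (cx, cy), radius r, AND one angle per data point.
        cx, cy, r = params[:3]
        phi = params[3:]
        model = np.c_[cx + r * np.cos(phi), cy + r * np.sin(phi)]
        return (model - pts).ravel()

    # Crude initialization: centroid, mean distance, nearest angles.
    c0 = pts.mean(axis=0)
    r0 = np.linalg.norm(pts - c0, axis=1).mean()
    phi0 = np.arctan2(pts[:, 1] - c0[1], pts[:, 0] - c0[0])

    fit = least_squares(residuals, np.r_[c0, r0, phi0])
    print("center:", fit.x[:2], "radius:", fit.x[2])

Replacing the circle with a subdivision curve or surface keeps the same structure: control points play the role of the center and radius, and preimage parameters on the surface play the role of the angles.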

Joint work with Tom Cashman and others.

Bio: Andrew Fitzgibbon is a principal researcher in the computer vision group at Microsoft Research Cambridge. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” (www.boujou.com) and to the human body tracking of Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and even a little neuroscience. He has published numerous highly cited papers and received many awards for his work, including 9 conference prizes for best paper or runner-up, the Silver Medal of the Royal Academy of Engineering, and the British Computer Society’s Roger Needham Award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. He has been program chair of CVPR and ECCV. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.