CORE Series

**Tuesday, April 18, 2017**

**EEB 125, 4:00–5:00 PM**

**Zaid Harchaoui**, University of Washington

**TITLE: Catalyst, a Generic Acceleration Scheme for Gradient-Based Optimization**

ABSTRACT: We introduce a generic scheme called Catalyst for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm. The proposed approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. Furthermore, the approach can be extended to possibly nonconvex optimization problems without sacrificing the rate of convergence to stationary points. We present experimental results showing that the Catalyst acceleration scheme is effective in practice, especially for ill-conditioned problems, where we measure significant improvements.
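To make the idea concrete, the abstract's "sequence of well-chosen auxiliary problems" can be sketched as an outer loop that adds a quadratic proximal term around an extrapolated point and hands the resulting subproblem to any first-order inner solver. The sketch below is illustrative only, not the authors' implementation: it uses plain gradient descent as the inner solver, a least-squares test objective, and the classical Nesterov extrapolation sequence; the parameter names (`kappa`, `inner_steps`, `inner_lr`) are hypothetical choices.

```python
import numpy as np

def catalyst(grad_f, x0, kappa, inner_lr, n_outer=50, inner_steps=100):
    """Sketch of the Catalyst outer loop.

    At each outer iteration, approximately minimize the auxiliary objective
        G(x) = f(x) + (kappa / 2) * ||x - y||^2
    around the extrapolated point y, then update y with a Nesterov-style
    momentum step. The inner solver here is plain gradient descent; in the
    talk's setting any of the listed methods (SAG, SVRG, ...) could be
    plugged in instead.
    """
    x_prev = x0.copy()
    y = x0.copy()
    alpha = 1.0  # classical extrapolation sequence (non-strongly-convex case)
    for _ in range(n_outer):
        # Approximately solve min_x f(x) + (kappa/2) ||x - y||^2
        # by a fixed number of gradient steps on the auxiliary objective.
        x = y.copy()
        for _ in range(inner_steps):
            g = grad_f(x) + kappa * (x - y)
            x = x - inner_lr * g
        # Nesterov extrapolation: y = x + beta * (x - x_prev)
        alpha_new = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
        beta = (alpha - 1.0) / alpha_new
        y = x + beta * (x - x_prev)
        x_prev, alpha = x, alpha_new
    return x_prev
```

A reasonable inner step size for this sketch is `1 / (L + kappa)`, since adding the proximal term makes the auxiliary gradient `(L + kappa)`-Lipschitz when `f` has an `L`-Lipschitz gradient.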

BIO: Zaid Harchaoui is currently a Provost’s Initiative in Data-driven Discovery Assistant Professor in the Department of Statistics and a Data Science Fellow in the eScience Institute at the University of Washington. He completed his Ph.D. at ParisTech (now in Univ. Paris-Saclay), working with Eric Moulines, Stephane Canu and Francis Bach. Before joining the University of Washington, he was a visiting assistant professor at the Courant Institute of Mathematical Sciences at New York University (2015–2016). Prior to this, he was a permanent researcher on the LEAR team of Inria (2010–2015). He was a postdoctoral fellow in the Robotics Institute of Carnegie Mellon University in 2009.

He received the Inria award for scientific excellence and the NIPS reviewer award. He gave a tutorial on “Frank-Wolfe, greedy algorithms, and friends” at ICML’14, on “Large-scale visual recognition” at CVPR’13, and on “Machine learning for computer vision” at MLSS Kyoto 2015. He recently co-organized the “Future of AI” symposium at New York University, the workshop on “Optimization for Machine Learning” at NIPS’14, and the “Optimization and statistical learning” workshop in 2015 and 2013 at the Ecole de Physique des Houches (France). He served or will serve as Area Chair for ICML 2015, ICML 2016, NIPS 2016, and ICLR 2016. He is currently an associate editor of IEEE Signal Processing Letters.