Spring 2017 Calendar

Apr 11 [CORE]
Liza Levina, Department of Statistics, University of Michigan
Interpretable Prediction Models for Network-Linked Data

Apr 18 [CORE]
Zaid Harchaoui, Department of Statistics, University of Washington
Catalyst, Generic Acceleration Scheme for Gradient-based Optimization

Apr 25
Andrew Pryhuber, Department of Mathematics, University of Washington
A QCQP Approach for Triangulation

May 2
Scott Roy, Department of Mathematics, University of Washington
An Optimal First-order Method Based on Optimal Quadratic Averaging

May 9
Peng Zheng, Department of Applied Mathematics, University of Washington
What’s the shape of your penalty?

May 30
Kellie MacPhee, Department of Mathematics, University of Washington

Jun 1
Madeleine Udell, Department of Operations Research and Information Engineering, Cornell University
Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage

Jun 6
Hongzhou Lin, Inria Grenoble

Peng Zheng; What’s the shape of your penalty?

May 9, 2017, 4pm
PDL C-401
Peng Zheng, Department of Applied Mathematics, University of Washington

Abstract: The performance of machine learning approaches is strongly influenced by the choice of misfit penalty and by the correct setting of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, all of which are time consuming for large-scale applications.

We present a data-driven approach that simultaneously solves inference problems and learns error structure and penalty parameters. We discuss theoretical properties of these joint problems, and present algorithms for their solution. We show numerical examples from the piecewise linear-quadratic (PLQ) family of penalties.
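
As a concrete example of a penalty with a tunable parameter, the Huber function, a member of the PLQ family mentioned in the abstract, switches from quadratic to linear at a threshold tau; this is exactly the kind of parameter the talk proposes to learn from data. A minimal illustrative sketch (not the speaker's method):

```python
import numpy as np

def huber(r, tau):
    """Huber misfit: quadratic for |r| <= tau, linear beyond the threshold tau."""
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= tau
    return np.where(small, 0.5 * r**2, tau * np.abs(r) - 0.5 * tau**2)

# The threshold tau controls where the penalty switches from quadratic to linear.
print(huber([0.5, 2.0], tau=1.0))  # -> [0.125 1.5]
```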

Scott Roy; An Optimal First-order Method Based on Optimal Quadratic Averaging

May 2, 2017, 4pm
PDL C-401
Scott Roy, Department of Mathematics, University of Washington

Abstract: In a recent paper, Bubeck, Lee, and Singh introduced a new first-order method for minimizing smooth strongly convex functions. Their geometric descent algorithm, largely inspired by the ellipsoid method, enjoys the optimal linear rate of convergence. We show that the same iterate sequence is generated by a scheme that in each iteration computes an optimal average of quadratic lower models of the function. Indeed, the minimum of the averaged quadratic approaches the true minimum at an optimal rate. This intuitive viewpoint reveals clear connections to the original fast-gradient methods and cutting plane ideas, and leads to limited-memory extensions with improved performance.

Joint work with Dmitriy Drusvyatskiy and Maryam Fazel.
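
For context, the basic building block can be sketched: mu-strong convexity turns each gradient evaluation into a quadratic lower model Q(y) = v + (mu/2)||y - c||^2, and two such models can be averaged with the weight that maximizes the averaged model's minimum value, which remains a valid lower bound. An illustrative sketch of that building block only, not the full algorithm:

```python
import numpy as np

def lower_model(f_x, grad, x, mu):
    """Canonical quadratic lower bound Q(y) = v + (mu/2)||y - c||^2
    implied by mu-strong convexity at the point x."""
    c = x - grad / mu
    v = f_x - np.dot(grad, grad) / (2 * mu)
    return v, c

def optimal_average(v1, c1, v2, c2, mu):
    """Average two lower models with the weight lam that maximizes the
    averaged model's minimum value v; the result is again a lower bound."""
    gap = np.dot(c1 - c2, c1 - c2)
    if gap == 0:
        return max(v1, v2), c1
    lam = np.clip(0.5 + (v1 - v2) / (2 * mu * gap), 0.0, 1.0)
    c = lam * c1 + (1 - lam) * c2
    v = lam * v1 + (1 - lam) * v2 + 0.5 * mu * lam * (1 - lam) * gap
    return v, c
```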

Andrew Pryhuber; A QCQP Approach for Triangulation

Apr 25, 2017, 4pm
PDL C-401
Andrew Pryhuber, Department of Mathematics, University of Washington

Abstract: Reconstruction of a 3D world point from $n\geq 2$ noisy 2D images is referred to as the triangulation problem and is fundamental in multi-view geometry. We show how this problem can be formulated as a quadratically constrained quadratic program and discuss an algorithm to construct candidate solutions. We also present a polynomial time test motivated by the underlying geometry of the triangulation problem to confirm optimality of such a solution. Based on work by Chris Aholt, Sameer Agarwal, and Rekha Thomas.
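
The QCQP machinery of the talk does not fit in a short snippet, but classical linear (DLT) triangulation, a standard baseline for the same problem, gives a concrete picture of what is being computed. A minimal sketch, not the talk's method:

```python
import numpy as np

def dlt_triangulate(Ps, xs):
    """Classical linear (DLT) triangulation: recover a 3D point from
    n >= 2 camera matrices Ps[i] (3x4) and image points xs[i] = (u, v).
    Each view contributes two linear constraints on the homogeneous point."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)   # null vector = smallest right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```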

Zaid Harchaoui; Catalyst, Generic Acceleration Scheme for Gradient-based Optimization

CORE Series
Tuesday, April 18, 2017
EEB 125, 4:00-5:00PM 
Zaid Harchaoui, University of Washington

TITLE: Catalyst, Generic Acceleration Scheme for Gradient-based Optimization

ABSTRACT: We introduce a generic scheme called Catalyst for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm. The proposed approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. Furthermore, the approach can be extended to venture into possibly nonconvex optimization problems without sacrificing the rate of convergence to stationary points. We present experimental results showing that the Catalyst acceleration scheme is effective in practice, especially for ill-conditioned problems where we measure significant improvements.
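
The outer loop described in the abstract can be caricatured in a few lines: approximately minimize a sequence of better-conditioned auxiliary problems f(x) + (kappa/2)||x - y||^2, then extrapolate between their solutions. This toy sketch uses plain gradient descent as the inner method and a fixed extrapolation weight beta; the actual Catalyst parameter schedule and inner stopping criteria are more refined:

```python
import numpy as np

def catalyst_sketch(grad_f, x0, kappa, beta, inner_steps, lr, outer_steps):
    """Toy outline of the accelerated proximal point idea behind Catalyst:
    approximately solve min_z f(z) + (kappa/2)||z - y||^2 with a basic
    inner method, then extrapolate the outer iterates."""
    x = x0.copy()
    y = x0.copy()
    for _ in range(outer_steps):
        # Inner solver: a few gradient steps on the auxiliary problem.
        z = y.copy()
        for _ in range(inner_steps):
            z -= lr * (grad_f(z) + kappa * (z - y))
        x_prev, x = x, z
        y = x + beta * (x - x_prev)   # Nesterov-style extrapolation
    return x
```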

BIO: Zaid Harchaoui is currently a Provost’s Initiative in Data-driven Discovery Assistant Professor in the Department of Statistics and a Data Science Fellow in the eScience Institute at the University of Washington. He completed his Ph.D. at ParisTech (now in Univ. Paris-Saclay), working with Eric Moulines, Stephane Canu and Francis Bach. Before joining the University of Washington, he was a visiting assistant professor at the Courant Institute of Mathematical Sciences at New York University (2015 – 2016). Prior to this, he was a permanent researcher on the LEAR team of Inria (2010 – 2015). He was a postdoctoral fellow in the Robotics Institute of Carnegie Mellon University in 2009.

He received the Inria award for scientific excellence and the NIPS reviewer award. He gave tutorials on “Frank-Wolfe, greedy algorithms, and friends” at ICML’14, on “Large-scale visual recognition” at CVPR’13, and on “Machine learning for computer vision” at MLSS Kyoto 2015. He recently co-organized the “Future of AI” symposium at New York University, the workshop on “Optimization for Machine Learning” at NIPS’14, and the “Optimization and statistical learning” workshop in 2013 and 2015 at the Ecole de Physique des Houches (France). He has served or will serve as Area Chair for ICML 2015, ICML 2016, NIPS 2016, and ICLR 2016. He is currently an associate editor of IEEE Signal Processing Letters.

Madeleine Udell; Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage

Jun 1, 2017, 3:30pm
Location TBA
Madeleine Udell, Department of Operations Research and Information Engineering, Cornell University

Abstract: In this talk, we consider a fundamental class of convex matrix optimization problems with low-rank solutions. We show it is possible to solve these problems using far less memory than the natural size of the decision variable when the problem data has a concise representation. Our proposed method, SketchyCGM, is the first algorithm to offer provable convergence to an optimal point with an optimal memory footprint. SketchyCGM modifies a standard convex optimization method — the conditional gradient method — to work on a sketched version of the decision variable, and can recover the solution from this sketch. In contrast to recent work on non-convex methods for this problem class, SketchyCGM is a convex method; and our convergence guarantees do not rely on statistical assumptions.
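
For intuition, here is plain conditional gradient (Frank-Wolfe) on a toy nuclear-norm problem. Each step touches the decision variable only through a rank-one update, which is what makes sketching possible; this sketch stores X in full and is not the storage-optimal algorithm of the talk:

```python
import numpy as np

def cgm_nuclear(M, tau, steps):
    """Plain conditional gradient for min ||X - M||_F^2 over ||X||_* <= tau.
    The linear minimization oracle over the nuclear-norm ball returns a
    rank-one atom built from the top singular pair of the gradient."""
    X = np.zeros_like(M)
    for k in range(steps):
        G = X - M                               # gradient of 0.5-scaled objective
        U, _, Vt = np.linalg.svd(G)
        S = -tau * np.outer(U[:, 0], Vt[0])     # rank-one LMO solution
        gamma = 2.0 / (k + 2)                   # standard step-size schedule
        X = (1 - gamma) * X + gamma * S
    return X
```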

Liza Levina; Interpretable Prediction Models for Network-Linked Data

CORE Series
Tuesday, April 11, 2017
EEB 125, 4:00-5:00PM 
Liza Levina, University of Michigan

TITLE: Interpretable Prediction Models for Network-Linked Data

ABSTRACT: Prediction problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network. For example, in adolescent health studies of risk-taking behaviors, information on the subjects’ social networks is often available and plays an important role through network cohesion, the empirically observed phenomenon of friends behaving similarly. Taking cohesion into account should allow us to improve prediction. Here we propose a regression-based framework with a network penalty on individual node effects to encourage similarity between predictions for linked nodes, and show that it outperforms traditional models both theoretically and empirically when network cohesion is present. The framework is easily extended to other models, such as the generalized linear model and Cox’s proportional hazard model. Applications to predicting teenagers’ behavior based on both demographic covariates and their friendship networks from the AddHealth data are discussed in detail.
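
A minimal illustration of the network-cohesion idea, assuming the simplest version of the model (node effects only, no covariates): differences between linked nodes' effects are penalized through the graph Laplacian, pulling friends' predictions together. The talk's full framework also includes covariates and GLM/Cox variants:

```python
import numpy as np

def cohesion_smooth(y, edges, n, lam):
    """Estimate node effects alpha by min ||y - alpha||^2 + lam * alpha' L alpha,
    where L is the graph Laplacian of the friendship network.
    Larger lam enforces more similarity between linked nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    # Closed-form solution of the ridge-like system (I + lam * L) alpha = y.
    return np.linalg.solve(np.eye(n) + lam * L, y)
```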

BIO: Liza Levina received her PhD in Statistics from UC Berkeley in 2002 and joined the University of Michigan the same year. Her research interests include networks, high-dimensional data, and sparsity. She has worked on estimating large covariance matrices, graphical models, and other topics in inference for high-dimensional data. She also works on statistical inference for network data, including problems of community detection and link prediction. Her research covers methodology, theory, and applications, especially to spectroscopy, remote sensing and, in the past, computer vision. She received the junior Noether Award from the ASA in 2010 and was elected a member of ISI in 2011.

Abe Engle; Local Convergence Rates of a Gauss-Newton Method for Convex Compositions

Mar 14, 2017, 4pm
PDL C-401
Abe Engle, Department of Mathematics, University of Washington

Abstract: We discuss a Gauss-Newton methodology for minimizing convex compositions of smooth functions. We analyze current local rates of quadratic convergence when the subproblems are exactly solved and propose inexact methods that relax current sharpness assumptions while maintaining speeds of convergence.
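
The simplest instance of the convex-composition setting takes h = ||.||^2, which recovers classical Gauss-Newton: each step solves the linearized least-squares subproblem exactly. A minimal sketch of that classical special case, not the inexact methods of the talk:

```python
import numpy as np

def gauss_newton(c, J, x0, steps):
    """Classical Gauss-Newton for min ||c(x)||^2: at each iterate, solve
    the linearized subproblem min_d ||c(x) + J(x) d||^2 and step to x + d."""
    x = x0.astype(float)
    for _ in range(steps):
        d, *_ = np.linalg.lstsq(J(x), -c(x), rcond=None)
        x = x + d
    return x
```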