*Bernd Sturmfels, MPI Leipzig / UC Berkeley*

*Tuesday February 18, 2020, BAG 154, 4-5pm*

TITLE: 3264 Conics in a Second

ABSTRACT: This lecture spans a bridge from 19th century geometry to 21st century applications. We start with a classical theme that was featured in the January 2020 issue of the Notices of the American Mathematical Society, namely the 3264 conics that are tangent to five given conics in the plane. We discuss the computation of these conics by solving a system of polynomial equations with numerical methods that are fast and reliable. We conclude with a problem in statistics, namely maximum likelihood estimation for linear Gaussian covariance models.
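As a toy illustration of the kind of computation the abstract describes, the sketch below refines one intersection point of two conics with Newton's method. This is only a sketch of the numerical viewpoint; the certified homotopy-continuation software behind the "3264 conics in a second" computation is far more sophisticated.

```python
import numpy as np

# Toy sketch: Newton's method refining an intersection of two conics,
#   F1 = x^2 + y^2 - 1 = 0   (a circle)
#   F2 = x*y - 1/4   = 0     (a hyperbola)
# This illustrates numerical polynomial-system solving, not the talk's
# actual homotopy-continuation machinery.
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0, x * y - 0.25])

def J(v):
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [y, x]])

v = np.array([1.0, 0.3])            # rough starting guess
for _ in range(20):                 # Newton step: v <- v - J(v)^{-1} F(v)
    v = v - np.linalg.solve(J(v), F(v))

residual = np.linalg.norm(F(v))     # should be near machine precision
```

Newton's iteration converges quadratically from a good start; homotopy continuation supplies such starting points for all solutions at once.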

*Alekh Agarwal, Microsoft Research AI*

Title: Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes

Abstract: Policy gradient methods are among the most effective methods in challenging reinforcement learning problems with large state and/or action spaces. However, little is known about even their most basic theoretical convergence properties, including: 1) if and how fast they converge to a globally optimal solution (say with a sufficiently rich policy class); 2) how they cope with approximation error due to using a restricted class of parametric policies; or 3) their finite sample behavior. In this talk, we will study all these issues, and provide a broad understanding of when first-order approaches to direct policy optimization in RL succeed. We will also identify the relevant notions of policy class expressivity underlying these guarantees in the approximate setting. Throughout, we will also highlight the interplay of exploration with policy optimization, both in our upper bounds and illustrative lower bounds. This talk is based on joint work with Sham Kakade, Jason Lee and Gaurav Mahajan. Please see https://arxiv.org/pdf/1908.00261 for details.
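For readers new to the topic, the minimal sketch below shows the basic object the abstract studies, a direct policy-gradient (REINFORCE-style) update for a softmax policy, on a two-armed bandit. The reward means, learning rate, and iteration count are illustrative choices, not the paper's setting.

```python
import numpy as np

# Hedged toy sketch (not the paper's analysis): softmax policy gradient
# on a 2-armed bandit with mean rewards (0.2, 0.8). REINFORCE estimates
# the gradient of expected reward via  r * grad log pi(a),  where
#   grad_theta log pi(a) = e_a - pi   for a softmax policy.
rng = np.random.default_rng(0)
means = np.array([0.2, 0.8])        # arm 1 is the better arm
theta = np.zeros(2)                 # policy parameters
lr = 0.1

for _ in range(2000):
    pi = np.exp(theta) / np.exp(theta).sum()    # softmax policy
    a = rng.choice(2, p=pi)                     # sample an action
    r = means[a] + 0.1 * rng.standard_normal()  # noisy reward
    grad = -pi.copy()
    grad[a] += 1.0                              # grad of log pi(a)
    theta += lr * r * grad                      # stochastic ascent step

pi = np.exp(theta) / np.exp(theta).sum()        # learned policy prefers arm 1
```

The talk's questions start exactly here: when and how fast do such first-order updates reach the globally optimal policy, and what happens when the policy class is restricted?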

Bio: Alekh Agarwal is a Principal Research Manager at Microsoft Research where he has been since 2012. His research broadly focuses on designing theoretically sound and practically useful techniques for sequential decision making problems. Within this context, he has worked on areas such as bandits, active learning and most recently reinforcement learning. He has also worked on several other aspects of machine learning including convex optimization, high-dimensional statistics and large-scale machine learning in distributed settings.

*Marco Cuturi, Google Brain, Paris*

Title: Computational Optimal Transport

Abstract: I will give in this talk a short introduction to optimal transport theory, a fecund field at the intersection of analysis, probability, and PDEs, crowned by two Fields Medals (Villani '10 and Figalli '18), which has now found applications in data science, notably biology, graphics, NLP, and imaging. I will explain how these intuitive tools require some adaptation before being used in high-dimensional settings, and present computational strategies to cope with the challenges of scale arising from the use of this theory on real-life problems.
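The workhorse of the computational strategies the abstract alludes to is entropic regularization solved by Sinkhorn iterations. The sketch below couples two small discrete distributions this way; the grid, marginals, and regularization strength are illustrative choices.

```python
import numpy as np

# Minimal sketch of entropic optimal transport via Sinkhorn iterations
# (an illustration of the standard algorithm, not the speaker's code).
# We couple two discrete distributions a and b on 5 points under a
# squared-distance cost, with entropic regularization eps.
x = np.linspace(0, 1, 5)
C = (x[:, None] - x[None, :]) ** 2           # cost matrix
a = np.full(5, 0.2)                           # uniform source marginal
b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # target marginal
eps = 0.25
K = np.exp(-C / eps)                          # Gibbs kernel

u = np.ones(5)
for _ in range(1000):                         # alternate marginal scalings
    v = b / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]               # entropic transport plan
```

Each iteration is just two matrix-vector products, which is what makes the method scale to the large problems arising in the applications mentioned above.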

*Cynthia Vinzant, Dept. of Mathematics, North Carolina State University*

Title: Determinants, polynomials, and matroids

Abstract: The determinant of a symmetric matrix is a fundamental object in mathematics, whose discrete and functional properties have applications across the scientific disciplines. The determinant of a matrix is also a real polynomial in its entries. Hyperbolic polynomials and, more generally, log-concave polynomials are real polynomials that share many useful functional properties of determinants. Like real-rooted univariate polynomials, they also have interesting combinatorial applications. I will discuss the beautiful real and combinatorial geometry underlying these polynomials and applications of this theory to counting and sampling problems involving combinatorial objects called matroids. This is based on joint work with Nima Anari, Kuikui Liu, and Shayan Oveis Gharan.
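A classical instance of the determinant-to-matroid-counting bridge in the abstract is Kirchhoff's matrix-tree theorem: the bases of a graphic matroid (the spanning trees of the graph) are counted by any cofactor of the graph Laplacian. The sketch below checks this for the complete graph K4, offered as background illustration rather than as part of the talk's new results.

```python
import numpy as np

# Kirchhoff's matrix-tree theorem: the number of spanning trees of a
# graph (bases of its graphic matroid) equals any cofactor of the graph
# Laplacian. For K4, Cayley's formula gives 4^(4-2) = 16 spanning trees.
n = 4
A = np.ones((n, n)) - np.eye(n)           # adjacency matrix of K4
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian L = D - A
trees = round(np.linalg.det(L[1:, 1:]))   # delete a row/column, take det
```

The log-concave polynomials of the talk vastly generalize this picture, yielding counting and sampling guarantees for bases of arbitrary matroids.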

CORE Series

Amir Ali Ahmadi, *ORFE, Princeton University*

**Friday, May 17, 2019**

**ECE 037, 2:30-3:30pm**

*Title:* Nonnegative polynomials: from optimization to control and learning


*Abstract:* The problem of recognizing nonnegativity of a multivariate polynomial has a celebrated history, tracing back to Hilbert’s 17th problem. In recent years, there has been much renewed interest in the topic because of a multitude of applications in applied and computational mathematics and the observation that one can optimize over an interesting subset of nonnegative polynomials using “sum of squares optimization”.

In this talk, we give a brief overview of some of our recent contributions to this area. In part (i), we propose more scalable alternatives to sum of squares optimization and show how they impact verification problems in control and robotics. Our algorithms do not rely on semidefinite programming, but instead use linear programming, or second-order cone programming, or are altogether free of optimization. In particular, we present the first Positivstellensatz that certifies infeasibility of a set of polynomial inequalities simply by multiplying certain fixed polynomials together and checking nonnegativity of the coefficients of the resulting product.
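A classical ancestor of such multiply-and-check-coefficients certificates is Pólya's theorem: a homogeneous polynomial p(x, y) that is strictly positive on the positive orthant, multiplied by a high enough power of (x + y), has only nonnegative coefficients. The sketch below finds the needed power for one small example; it illustrates the certificate style, not the new Positivstellensatz from the talk.

```python
# Polya-style certificate, a classical relative of the talk's result.
# p(x, y) = x^2 - 1.5*x*y + y^2 is strictly positive for x, y > 0
# (dehomogenized: t^2 - 1.5*t + 1 has negative discriminant), yet it has
# a negative coefficient. Multiplying by (x + y)^N eventually clears it,
# and checking nonnegativity of coefficients is trivial.
def convolve(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

p = [1.0, -1.5, 1.0]       # coefficients of t^2 - 1.5 t + 1  (= p(t, 1))
q = list(p)
N = 0
while min(q) < 0:          # multiply by (t + 1) until no negative coefficient
    q = convolve(q, [1.0, 1.0])
    N += 1                 # N is the certified Polya exponent
```

All arithmetic here is exact in floating point (coefficients are multiples of 0.5), and the loop stops at N = 5, producing a certificate of positivity by inspection of signs alone.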

In part (ii), we study the problem of learning dynamical systems from very limited data but in the presence of “side information”, such as physical laws or contextual knowledge. This is motivated by safety-critical applications where an unknown dynamical system needs to be controlled after a very short learning phase in which a few of its trajectories are observed. (Imagine, e.g., the task of autonomously landing a passenger airplane that has gone through sudden wing damage.) We show that sum of squares and semidefinite optimization are particularly suited for exploiting side information in order to assist the task of learning when data is limited. Joint work with A. Majumdar and G. Hall (part (i)) and with B. El Khadir (part (ii)).

*Bio:* Amir Ali Ahmadi ( http://aaa.princeton.edu/ ) is a Professor at the Department of Operations Research and Financial Engineering at Princeton University and an Associated Faculty member of the Program in Applied and Computational Mathematics, the Department of Computer Science, the Department of Mechanical and Aerospace Engineering, and the Center for Statistics and Machine Learning. Amir Ali received his PhD in EECS from MIT and was a Goldstine Fellow at the IBM Watson Research Center prior to joining Princeton. His research interests are in optimization theory, computational aspects of dynamics and control, and algorithms and complexity. Amir Ali’s distinctions include the Sloan Fellowship in Computer Science, a MURI award from the AFOSR, the NSF CAREER Award, the AFOSR Young Investigator Award, the DARPA Faculty Award, the Google Faculty Award, the Howard B. Wentz Junior Faculty Award as well as the Innovation Award of Princeton University, the Goldstine Fellowship of IBM Research, and the Oberwolfach Fellowship of the NSF. His undergraduate course at Princeton (ORF 363, “Computing and Optimization”) has received the 2017 Excellence in Teaching of Operations Research Award of the Institute for Industrial and Systems Engineers and the 2017 Phi Beta Kappa Award.

*Rainer Sinn, Fachbereich Mathematik und Informatik, Freie Universität Berlin*

*Tuesday May 14, 2019, More Hall 234, 4pm-5:30pm*

Title: From conic programming to real algebraic geometry and back

Abstract: In this talk, we will present interactions between conic programming, a branch of optimization, and real algebraic geometry, the algebraic and geometric study of real polynomial systems. This has led to a number of beautiful mathematical theorems, both improving our understanding of the geometric picture as well as sharpening our tools for applications.

On the optimization side, we will take a look at hyperbolic programs, a class of conic programs first studied in the 1990s that includes linear programs as well as semidefinite programs. On the algebraic side, it is based on the theory of hyperbolic (respectively, real stable) polynomials, which show up in many parts of mathematics, ranging from control theory to combinatorics. In many ways, hyperbolic polynomials behave like determinants of families of symmetric matrices. We will compare these two paradigms, one based in matrix calculus, the other purely in algebraic geometry, through a series of examples, pictures, theorems, and conjectures.
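The prototypical example behind the "determinants of symmetric matrices" paradigm can be checked numerically: the characteristic polynomial of a real symmetric matrix is hyperbolic with respect to the identity, meaning all of its roots are real. The sketch below verifies this for a random symmetric matrix, as background illustration only.

```python
import numpy as np

# Prototype of hyperbolicity (illustration, not the talk's examples):
# for a real symmetric A, the polynomial t -> det(t*I - A) has only
# real roots -- they are exactly the eigenvalues of A. This is the
# model case that hyperbolic polynomials generalize.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # random real symmetric matrix

coeffs = np.poly(A)                    # coefficients of det(t*I - A)
roots = np.roots(coeffs)               # roots of the characteristic poly
max_imag = np.abs(roots.imag).max()    # ~0: every root is real
```

Hyperbolic programming asks for optimization over the cone carved out by such real-rootedness, whether or not a symmetric determinantal representation is available.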

Bios:

Daniel Plaumann has been associate professor of algebra and its applications at TU Dortmund, Germany, since 2016. His research interests are real and classical algebraic geometry, positive polynomials, matrix inequalities, and applications to optimization and functional analysis. He received his PhD in 2008 from the University of Konstanz under the supervision of Claus Scheiderer. In 2010, he was a Feodor Lynen Fellow of the Alexander von Humboldt Foundation at UC Berkeley, and in 2014 a Research Fellow at Nanyang Technological University in Singapore. From 2013 to 2016 he was a Research Fellow at the Zukunftskolleg of the University of Konstanz. He is currently a long-term visitor at the Simons Institute for the Theory of Computing in Berkeley.

Rainer Sinn has been assistant professor for discrete geometry at Freie Universität Berlin, Germany, since 2017. His research interests are discrete, convex, and real algebraic geometry. He graduated from the University of Konstanz in 2014 under the supervision of Claus Scheiderer with a fellowship of the German National Academic Foundation. In 2014, before moving to the Georgia Institute of Technology for a postdoc position with Greg Blekherman, he was a visiting researcher at the National Institute of Mathematical Sciences in Daejeon, Korea, during a Thematic Program on Applied Algebraic Geometry. In 2017, he was at the Max Planck Institute for Mathematics in the Sciences in Leipzig as a member of the Nonlinear Algebra group led by Bernd Sturmfels. He is currently a research fellow at the Simons Institute for the Theory of Computing in Berkeley as part of the program on Geometry of Polynomials.

CORE Series

Jesús De Loera, *UC Davis*

**Friday, November 30, 2018**

**MEB 248, 2:30pm**

**TITLE:** Variations on a theme by G. Dantzig: Revisiting the principles of the Simplex Algorithm

**ABSTRACT:** Linear programs (LPs) are, without any doubt, at the core of both the theory and the practice of modern applied and computational optimization (e.g., in discrete optimization, LPs power practical branch-and-bound computations and rounding schemes in approximation algorithms). Fast algorithms are indispensable.

George Dantzig’s simplex method is one of the most famous algorithms to solve LPs and SIAM even elected it as one of the top 10 most influential algorithms of the 20th Century. But despite its key importance, many simple easy-to-state mathematical properties of the simplex method and its geometry remain unknown. The geometry of the simplex method is a topic in the convex-combinatorial geometry of polyhedra. Perhaps the most famous geometric-combinatorial challenge is to determine a worst-case upper bound for the graph diameter of polyhedra.

In this talk, I will look at how abstractions of the simplex method provide useful insight into the properties of this famous algorithm. The first type of abstraction is to remove coordinates entirely and is related to combinatorial topology, the second is related to generalizing the pivoting moves. This survey lecture includes joint work with Steve Klee, Raymond Hemmecke, and Jon Lee.
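For readers who have not seen Dantzig's method up close, the sketch below is a bare-bones tableau simplex with his classic most-negative-cost pivot rule, run on a textbook two-variable LP. It is an illustration of the algorithm the talk abstracts, not production code: it ignores degeneracy, cycling, and numerical scaling.

```python
import numpy as np

# Bare-bones tableau simplex with Dantzig's pivot rule (illustrative
# sketch only): maximize c.x subject to A x <= b, x >= 0, with b >= 0.
def simplex(c, A, b):
    m, n = A.shape
    # Tableau [A | I | b]; objective row is [-c | 0 | 0].
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    z = np.concatenate([-c, np.zeros(m + 1)])
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        j = int(np.argmin(z[:-1]))           # Dantzig rule: most negative cost
        if z[j] >= 0:
            break                            # no improving column: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))           # ratio test picks the leaving row
        T[i] /= T[i, j]                      # pivot on entry (i, j)
        for k in range(m):
            if k != i:
                T[k] -= T[k, j] * T[i]
        z -= z[j] * T[i]
        basis[i] = j
    x = np.zeros(n)
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i, -1]                  # read off the basic solution
    return x

# Textbook example: maximize 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18.
x = simplex(np.array([3.0, 5.0]),
            np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
            np.array([4.0, 12.0, 18.0]))     # optimum at (2, 6), value 36
```

Each pivot walks along an edge of the feasible polyhedron to an adjacent vertex; the diameter question in the abstract asks how long such walks must be in the worst case.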

**BIO:** Jesús A. De Loera received his Bachelor of Science degree in Mathematics from the National University of Mexico in 1989, and a Ph.D. in Applied Mathematics from Cornell University in 1995. He arrived at UC Davis in 1999, where he is now a professor of Mathematics, as well as a member of the Graduate Groups in Computer Science and Applied Mathematics. He has held visiting positions at the University of Minnesota, the Swiss Federal Institute of Technology (ETH Zürich), the Mathematical Sciences Research Institute (MSRI) in Berkeley, Universität Magdeburg (Germany), the Institute for Pure and Applied Mathematics at UCLA (IPAM), the Newton Institute at Cambridge University (UK), and the Technische Universität München.

His research covers a wide range of topics, including Combinatorics, Algorithms, Convex Geometry, Applied Algebra, and Optimization. In 2004 he received an Alexander von Humboldt Fellowship, and he won the 2010 INFORMS Computing Society Prize for his work on algebraic algorithms in Optimization. For his contributions to Discrete Geometry and Combinatorial Optimization, as well as for service to the profession, including mentoring and diversity, he was elected a fellow of the American Mathematical Society in 2014. For his mentoring and teaching he received the 2013 Chancellor’s award for mentoring undergraduate research and, in 2017, the Mathematical Association of America Golden Section Teaching Award. He has supervised twelve Ph.D. students and over 50 undergraduate research projects. He is currently an associate editor for the *SIAM Journal on Discrete Mathematics*, the *SIAM Journal on Applied Algebra and Geometry*, and the *Boletín de la Sociedad Matemática Mexicana*.