Category Archives: Fall 2018

Jesús De Loera; Variations on a theme by G. Dantzig: Revisiting the principles of the Simplex Algorithm

CORE Series
Jesús De Loera, UC Davis
Friday, November 30, 2018
MEB 248, 2:30pm

TITLE: Variations on a theme by G. Dantzig: Revisiting the principles of the Simplex Algorithm

Linear programs (LPs) are, without any doubt, at the core of both the theory and the practice of modern applied and computational optimization (in discrete optimization, for instance, LPs drive practical branch-and-bound computations, and in approximation algorithms they underlie rounding schemes). Fast algorithms are indispensable.

George Dantzig’s simplex method is one of the most famous algorithms for solving LPs, and SIAM even elected it one of the top 10 most influential algorithms of the 20th century. But despite its key importance, many simple, easy-to-state mathematical properties of the simplex method and its geometry remain unknown. The geometry of the simplex method is a topic in the convex-combinatorial geometry of polyhedra. Perhaps the most famous geometric-combinatorial challenge is to determine a worst-case upper bound for the graph diameter of polyhedra.
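To fix ideas, here is a minimal, self-contained sketch of the classical tableau form of Dantzig's simplex method for a problem in the standard form maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack basis is feasible). This toy implementation is purely illustrative; the function and variable names are our own, not from the talk, and it omits the degeneracy and cycling safeguards a real solver needs.

```python
# Toy tableau simplex for: maximize c.x  s.t.  A x <= b, x >= 0, b >= 0.
# Illustrative sketch only -- no anti-cycling rule, no presolve.

def simplex(A, b, c):
    m, n = len(A), len(c)
    # Tableau [A | I | b] with objective row [-c | 0 | 0].
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        # Dantzig's pivot rule: enter the most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                            # optimal: no improving column
        # Ratio test picks the leaving row (keeps the basis feasible).
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        # Pivot: normalize the row, eliminate the column everywhere else.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                T[i] = [v - T[i][col] * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Example: maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
x, opt = simplex([[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0], [3.0, 2.0])
# The optimum sits at the vertex (4, 0) with objective value 12.
```

Each pivot walks along an edge of the feasible polyhedron from one vertex to an adjacent one, which is exactly why the graph diameter of polyhedra, mentioned above, bounds how short such a walk could possibly be.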

In this talk, I will look at how abstractions of the simplex method provide useful insight into the properties of this famous algorithm. The first type of abstraction removes coordinates entirely and is related to combinatorial topology; the second generalizes the pivoting moves. This survey lecture includes joint work with Steve Klee, Raymond Hemmecke, and Jon Lee.

Jesús A. De Loera received his Bachelor of Science degree in Mathematics from the National University of Mexico in 1989, and a Ph.D. in Applied Mathematics from Cornell University in 1995. He arrived at UC Davis in 1999, where he is now a professor of Mathematics, as well as a member of the graduate groups in Computer Science and Applied Mathematics. He has held visiting positions at the University of Minnesota, the Swiss Federal Institute of Technology (ETH Zürich), the Mathematical Sciences Research Institute (MSRI) in Berkeley, Universität Magdeburg (Germany), the Institute for Pure and Applied Mathematics (IPAM) at UCLA, the Isaac Newton Institute at Cambridge University (UK), and the Technische Universität München.

His research covers a wide range of topics, including Combinatorics, Algorithms, Convex Geometry, Applied Algebra, and Optimization. In 2004 he received an Alexander von Humboldt Fellowship, and he won the 2010 INFORMS Computing Society Prize for his work on algebraic algorithms in Optimization. For his contributions to Discrete Geometry and Combinatorial Optimization, as well as for service to the profession, including mentoring and diversity, he was elected a fellow of the American Mathematical Society in 2014. For his mentoring and teaching he received the 2013 Chancellor’s award for mentoring undergraduate research and, in 2017, the Mathematical Association of America Golden Section Teaching Award. He has supervised twelve Ph.D. students and over 50 undergraduate research projects. He is currently an associate editor for the SIAM Journal on Discrete Mathematics, the SIAM Journal on Applied Algebra and Geometry, and the Boletín de la Sociedad Matemática Mexicana.

Francis Bach; Can machine learning survive the artificial intelligence revolution? 

CORE Series
Francis Bach, Inria and Ecole Normale Supérieure
Thursday, November 8, 2018
Electrical Engineering Building (EEB) 105, 11:00am


TITLE: Can machine learning survive the artificial intelligence revolution?

Data and algorithms are ubiquitous in all scientific, industrial and personal domains. Data now come in multiple forms (text, image, video, web, sensors, etc.), are massive, and require more and more complex processing beyond mere indexation or the computation of simple statistics, such as recognizing objects in images or translating texts. For all of these tasks, commonly referred to as artificial intelligence (AI), significant recent progress has allowed algorithms to reach levels of performance that were deemed unreachable a few years ago and that make these algorithms useful to everyone.

Many scientific fields contribute to AI, but most of the visible progress comes from machine learning and tightly connected fields such as computer vision and natural language processing. Indeed, many of the recent advances are due to the availability of massive data to learn from, large computing infrastructures, and new machine learning models (in particular deep neural networks).

Beyond the well-publicized visibility of some advances, machine learning has always been a field characterized by constant exchange between theory and practice, with a stream of algorithms that exhibit both good empirical performance on real-world problems and some form of theoretical guarantee. Is this still possible?

In this talk, I will present recent illustrative machine learning successes and propose some answers to the question above.

Francis Bach is the Distinguished Visiting Faculty of the NSF-TRIPODS Algorithmic Foundations of Data Science Institute. The seminar is part of the CORE Seminar Series, the Data Science Seminar Series, and the ML Seminar Series.

Francis Bach is a researcher at Inria, leading since 2011 the machine learning team, which is part of the Computer Science Department at Ecole Normale Supérieure. He graduated from Ecole Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at Ecole des Mines de Paris, then joined the computer vision project-team at Inria/Ecole Normale Supérieure from 2007 to 2010. Francis Bach is primarily interested in machine learning, and especially in graphical models, sparse methods, kernel-based learning, large-scale convex optimization, computer vision, and signal processing. He obtained a Starting Grant in 2009 and a Consolidator Grant in 2016 from the European Research Council, and received the Inria young researcher prize in 2012, the ICML test-of-time award in 2014, as well as the Lagrange prize in continuous optimization in 2018. In 2015, he was program co-chair of the International Conference on Machine Learning (ICML), and general chair in 2018; he is now co-editor-in-chief of the Journal of Machine Learning Research.