Upcoming Talks 2014

January 14
Thomas Rothvoss, Department of Mathematics, MIT.
The matching polytope has exponential extension complexity

January 21
Mary Beth Hribar, Microsoft.
Adventures in the Unknown, My Career in Industry after an Optimization Dissertation

January 28
Samet Oymak, Department of Electrical Engineering, Caltech.
A General Theory of Noisy Linear Inverse Problems

February 11
Mario Micheli, Department of Mathematics, UW.
An Optimization Approach for Image Recovery Through Optical Turbulence

February 21 [MAC]
Maryam Fazel, Department of Electrical Engineering, UW.
Filling In the Gaps: Recovery from Incomplete Information

February 25
Marina Meila, Department of Statistics, UW.
Optimization for Ordered Data

March 18
Dvijotham Krishnamurthy, Department of Computer Science and Engineering, UW.
Convex Formulations of Controller Synthesis

March 31 [CORE]
David Blei, Department of Computer Science, Princeton University.
Probabilistic Topic Models of Text and Users

April 8
Rishabh Iyer, Department of Electrical Engineering, UW.
Submodular Combinatorial Problems in Machine Learning: Algorithms and Applications

April 15 [CORE]
Sanjeev Arora, Department of Computer Science, Princeton University.
Overcoming Intractability in Unsupervised Learning

April 22
Hongbo Dong, Department of Mathematics, Washington State University.
A New Approach to Relax Nonconvex Quadratics

April 29
Rina Foygel, Department of Statistics, University of Chicago.
Demixing signals: the geometry of corrupted sensing

May 2–3
West Coast Optimization Meeting

May 8 [CORE]
Dan Spielman, Department of Computer Science, Yale University.
Spectral Sparsification of Graphs

May 9 [MAC]
Dan Spielman, Department of Computer Science, Yale University.
Physical Metaphors for Graphs and Networks

May 13
Noah Simon, Department of Biostatistics, University of Washington.
Flexible Sparse Modeling

May 23
Michael Joswig, Institute of Mathematics, TU Berlin.
Long and Winding Central Paths

May 27
Elina Robeva, Department of Mathematics, UC Berkeley.
Fixed Points of the EM Algorithm and Nonnegative Rank Boundaries

See below for the abstracts.

Elina Robeva; Fixed Points of the EM Algorithm and Nonnegative Rank Boundaries

May 27, 2014, 4:00pm
EEB 303
Elina Robeva, Department of Mathematics, UC Berkeley.
Fixed Points of the EM Algorithm and Nonnegative Rank Boundaries

Abstract: Matrices of nonnegative rank $r$ represent mixtures of $r$ independent distributions for two random variables. Likelihood inference for this model leads to problems in real algebraic geometry that we address in this work. We characterize the set of fixed points of Expectation Maximization, and we study the boundary of the space of matrices with nonnegative rank at most $3$. Both of these correspond to algebraic varieties with many irreducible components.
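As background (not taken from the talk), here is a minimal numpy sketch of the EM iteration whose fixed points are being characterized: fitting a mixture of $r$ product distributions to a joint probability matrix $P$, i.e., $P \approx \sum_k \pi_k\, a_k b_k^\top$. The function name, initialization, and iteration count are illustrative.

```python
import numpy as np

def em_nonnegative_rank(P, r, iters=500, rng=None):
    """EM for a mixture of r product distributions of two discrete variables.

    P : (m, n) joint probability matrix (nonnegative, sums to 1).
    Returns mixing weights pi (r,), row factors A (m, r), column factors B (n, r)
    so that P is approximated by sum_k pi[k] * outer(A[:, k], B[:, k]).
    """
    rng = np.random.default_rng(rng)
    m, n = P.shape
    pi = np.full(r, 1.0 / r)
    A = rng.dirichlet(np.ones(m), size=r).T    # columns: distributions over rows
    B = rng.dirichlet(np.ones(n), size=r).T    # columns: distributions over columns
    for _ in range(iters):
        # E-step: responsibility of component k for cell (i, j)
        Q = np.einsum('k,ik,jk->ijk', pi, A, B)          # shape (m, n, r)
        Q /= Q.sum(axis=2, keepdims=True) + 1e-12
        # M-step: reweight responsibilities by the observed joint probabilities
        Nk = np.einsum('ij,ijk->k', P, Q)                # expected mass per component
        pi = Nk / Nk.sum()
        A = np.einsum('ij,ijk->ik', P, Q) / Nk           # row marginals per component
        B = np.einsum('ij,ijk->jk', P, Q) / Nk           # column marginals per component
    return pi, A, B
```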

Michael Joswig; Long and Winding Central Paths

May 23, 2014, 3:30pm
LOW 112
Michael Joswig, Institute of Mathematics, TU Berlin.
Long and Winding Central Paths

Abstract: We disprove a continuous analog of the Hirsch conjecture proposed by Deza, Terlaky and Zinchenko, by constructing a family of linear programs with $3r+4$ inequalities in dimension $2r+2$ where the central path has a total curvature in $\Omega(2^r/r)$. Our method is to tropicalize the central path in linear programming. The tropical central path is the piecewise-linear limit of the central paths of parameterized families of classical linear programs viewed through logarithmic glasses. We show in particular that the tropical analogue of the analytic center is nothing but the tropical barycenter, that is, the maximum of a tropical polyhedron. It follows that, unlike in the classical case, the tropical central path may lie on the boundary of the tropicalization of the feasible set, and may even coincide with a path of the tropical simplex method. Finally, our counter-example is obtained as a deformation of a family of tropical linear programs introduced by Bezem, Nieuwenhuis and Rodríguez-Carbonell.

This is joint work with Xavier Allamigeon, Pascal Benchimol and Stéphane Gaubert.
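For readers outside optimization, the central path referred to above is the standard log-barrier path of a linear program; in a common formulation (my notation, not the authors'),

$$
x^\star(\mu) \;=\; \arg\min_{Ax < b}\ \Big\{\, c^\top x \;-\; \mu \sum_i \log\big(b_i - a_i^\top x\big) \Big\}, \qquad \mu > 0,
$$

and the talk concerns lower bounds on the total curvature of the curve $\{x^\star(\mu) : \mu > 0\}$, which interior-point methods follow as $\mu \to 0^+$.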

Noah Simon; Flexible Sparse Modeling

May 13, 2014, 4:00pm
EEB 303
Noah Simon, Department of Biostatistics, University of Washington.
Flexible Sparse Modeling

Abstract: Our ability to collect data has exploded over recent years. Across science and business, we now collect thousands to millions of measurements on each person. It is often of interest to use these measurements to predict some response (e.g., the likelihood of getting a disease, or the probability of clicking a link). Methods for this problem must balance three objectives: they must be predictive, they must be computationally tractable, and, in many applications, they must be interpretable. Many machine learning methods are highly successful at the first two, but are black boxes. In contrast, the Lasso ($\ell_1$-penalized regression) is interpretable and computationally tractable, but assumes linearity, which limits its predictive accuracy.

In this talk we will discuss a class of sparse additive models induced by combining sparsity and structural penalties. These are more flexible than the standard linear penalized model, but maintain its interpretability and computational tractability. We will show when these penalties can and cannot be combined to induce the desired structure and sparsity. We will give an efficient algorithm for fitting a wide class of these models, and show that some of these models have desirable theoretical properties.

This is joint work with Ashley Petersen and Daniela Witten (UW).
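As a rough illustration (my notation, not necessarily the speakers'), objectives of this kind combine a sparsity penalty with a structural penalty on each additive component $f_j$:

$$
\min_{f_1,\dots,f_p}\ \frac{1}{2n}\sum_{i=1}^n \Big(y_i - \sum_{j=1}^p f_j(x_{ij})\Big)^2
\;+\; \lambda \sum_{j=1}^p \Big( \|f_j\|_n + \gamma\, P(f_j) \Big),
$$

where $\|f_j\|_n$ is an empirical norm that can zero out entire components (sparsity), $P$ is a structural penalty (for example, one encouraging smooth or piecewise-constant fits), and $\lambda$, $\gamma$ trade off sparsity against flexibility.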

Dan Spielman; Spectral Sparsification of Graphs

May 8, 2014, 3:30pm
EEB 105
Dan Spielman, Department of Computer Science, Yale.
Spectral Sparsification of Graphs

Abstract: We introduce a notion of what it means for one graph to be a good spectral approximation of another, and prove that every graph can be well-approximated by a graph with few edges.
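Concretely, one standard way to state such a notion (not a quotation from the talk): a graph $H$ is a $(1\pm\epsilon)$ spectral approximation of $G$ if their Laplacian quadratic forms agree up to a factor of $1\pm\epsilon$,

$$
(1-\epsilon)\, x^\top L_G\, x \;\le\; x^\top L_H\, x \;\le\; (1+\epsilon)\, x^\top L_G\, x \qquad \text{for all } x \in \mathbb{R}^n .
$$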

We ask how well a given graph can be approximated by a sparse graph. Expander graphs can be viewed as sparse approximations of complete graphs, with Ramanujan expanders providing the best possible approximations. We prove that every graph can be approximated by a sparse graph almost as well as the complete graphs are approximated by the Ramanujan expanders: our approximations employ at most twice as many edges to achieve the same approximation factor.

We also present an efficient randomized algorithm for constructing sparse approximations that only uses a logarithmic factor more edges than optimal.

Our algorithms follow from the solution of a problem in linear algebra. Given any expression of a rank-n symmetric matrix A as a sum of rank-1 symmetric matrices, we show that A can be well approximated by a weighted sum of only O(n) of those rank-1 matrices.
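As a rough illustration of the randomized construction in this matrix language (a sketch under my own naming and constants, not the algorithm from the talk verbatim): sample the rank-1 terms with probability proportional to their leverage scores and reweight. This gives the logarithmic-factor guarantee mentioned above; the tighter O(n) construction is deterministic and more involved.

```python
import numpy as np

def sparsify_by_leverage_sampling(V, eps=0.5, rng=None):
    """Sketch of spectral sparsification by leverage-score sampling.

    Rows v_i of V define A = sum_i v_i v_i^T.  Returns weights w so that
    A_tilde = sum_i w[i] v_i v_i^T is a (1 +/- eps) spectral approximation of A
    with high probability, using roughly O(n log n / eps^2) nonzero weights.
    """
    rng = np.random.default_rng(rng)
    m, n = V.shape
    A = V.T @ V                                   # A = sum_i v_i v_i^T
    A_pinv = np.linalg.pinv(A)
    tau = np.einsum('ij,jk,ik->i', V, A_pinv, V)  # leverage scores tau_i = v_i^T A^+ v_i
    p = tau / tau.sum()                           # sampling distribution over rank-1 terms
    q = int(np.ceil(9 * n * np.log(n) / eps**2))  # number of samples; constant is illustrative
    idx = rng.choice(m, size=q, p=p)              # sample terms i.i.d., proportional to leverage
    w = np.zeros(m)
    np.add.at(w, idx, 1.0 / (q * p[idx]))         # reweight so that E[A_tilde] = A
    return w
```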

This is joint work with Joshua Batson, Nikhil Srivastava and Shang-Hua Teng.

Biosketch: Daniel Alan Spielman received his B.A. in Mathematics and Computer Science from Yale in 1992, and his Ph.D. in Applied Mathematics from M.I.T. in 1995. He spent a year as an NSF Mathematical Sciences Postdoc in the Computer Science Department at U.C. Berkeley, and then taught in the Applied Mathematics Department at M.I.T. until 2005. Since 2006, he has been a Professor at Yale University. He is presently the Henry Ford II Professor of Computer Science, Mathematics, and Applied Mathematics.

He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 Gödel Prize, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, an inaugural Simons Investigator Award, and a MacArthur Fellowship. He is a Fellow of the Association for Computing Machinery and a member of the Connecticut Academy of Science and Engineering. His main research interests include the design and analysis of algorithms, graph theory, machine learning, error-correcting codes and combinatorial scientific computing.

Rina Foygel; Demixing signals: the geometry of corrupted sensing

April 29, 2014, 4:00pm
EEB 303
Rina Foygel, Department of Statistics, University of Chicago.
Demixing signals: the geometry of corrupted sensing

Abstract: In the compressed sensing problem, a high-dimensional signal with underlying structure can often be recovered with a small number of linear measurements. When multiple structured signals are superimposed, similar techniques can be used to try to separate the signals at a low sample size. We extend theoretical work on compressed sensing to the corrupted sensing problem, where our measurements of a structured signal are corrupted in a structured way (such as a sparse set of outliers in the measurements). Convex penalties placed on the signal and the corruption reflect their respective latent structure (e.g., sparsity, block-wise sparsity, low rank, etc.). The resulting penalized optimization problem involves a tuning parameter controlling the tradeoff between the two penalties. We find that this tuning parameter can be chosen by comparing geometric properties of the two types of structure, which requires no cross-validation and yields nearly optimal results in theory and in simulations.

Joint work with Lester Mackey.
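In symbols (my notation, not the speaker's), the recovery problem discussed takes roughly the form

$$
\min_{x,\,v}\ \ \|x\|_{\mathrm{sig}} \;+\; \lambda\, \|v\|_{\mathrm{cor}}
\quad \text{subject to} \quad \|y - \Phi x - v\|_2 \le \delta,
$$

where $\|\cdot\|_{\mathrm{sig}}$ and $\|\cdot\|_{\mathrm{cor}}$ are convex penalties reflecting the structure of the signal $x$ and the corruption $v$ (for instance, $\ell_1$ norms for sparsity), and the talk concerns choosing the tradeoff parameter $\lambda$ from geometric quantities associated with the two penalties rather than by cross-validation.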

Sanjeev Arora; Overcoming Intractability in Unsupervised Learning

April 15, 2014, 4:00pm
CSE 691 (The Gates Commons)
Sanjeev Arora, Department of Computer Science, Princeton.
Overcoming Intractability in Unsupervised Learning

Abstract: Unsupervised learning, i.e., learning with unlabeled data, is increasingly important given today's data deluge. Most natural problems in this domain (e.g., for models such as mixture models, HMMs, graphical models, topic models, sparse coding/dictionary learning, and deep learning) are NP-hard. Therefore researchers in practice use either heuristics or convex relaxations with no concrete approximation bounds. Several nonconvex heuristics work well in practice, which is also a mystery.

The talk will describe a sequence of recent results whereby rigorous approaches leading to polynomial running time are possible for several problems in unsupervised learning. The proof of polynomial running time usually relies upon nondegeneracy assumptions on the data and the model parameters, and often also on stochastic properties of the data (average-case analysis). Some of these new algorithms are very efficient and practical, e.g., for topic modeling.
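One concrete example of such a nondegeneracy assumption (my paraphrase, not the speaker's wording): topic modeling can be viewed as nonnegative matrix factorization of a word–document matrix,

$$
M \;\approx\; A\,W, \qquad A \in \mathbb{R}_{\ge 0}^{\,\text{words}\times\text{topics}},\quad W \in \mathbb{R}_{\ge 0}^{\,\text{topics}\times\text{docs}},
$$

and the separability ("anchor word") assumption, that every topic $k$ has some word $i_k$ with $A_{i_k,k} > 0$ and $A_{i_k,k'} = 0$ for $k' \ne k$, is the kind of condition under which this line of work obtains provably polynomial-time recovery.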

Biosketch: Sanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University. His research spans Theoretical Computer Science, including foundational work in Algorithms and Computational Complexity. His recent work focuses on provable bounds for Machine Learning. He has received numerous awards and distinctions. Most recently, these include the 2012 Fulkerson Prize, the 2011 ACM-Infosys Foundation Award in the Computing Sciences, the 2010 Gödel Prize, and the Best Paper Award at the 2010 Annual Symposium on Foundations of Computer Science. He is an ACM Fellow and recipient of the 1995 ACM Doctoral Dissertation Award.