Spring 2015 Calendar

March 30 [CORE]
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

April 14
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

April 28 [CORE]
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
Applied Random Matrix Theory

May 5
Frank Permenter, EECS Department, MIT.
Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

May 12
Sébastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

May 19 [CORE]
Jonathan Kelner, MIT.
Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

See below for the abstracts.

Jonathan Kelner; Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

CORE Series
May 19, 2015, 4:00pm
EEB 125
Jonathan Kelner, Department of Mathematics, MIT.
Bridging the Numerical and the Combinatorial: Emerging Tools, Techniques, and Design Principles for Graph Algorithms

Abstract: Flow and cut problems on graphs are among the most fundamental and extensively studied problems in Operations Research and Optimization, playing a foundational role in both the theory and practice of algorithm design. While the classical algorithms for these problems were largely based on combinatorial approaches, the past several years have witnessed the emergence of a powerful collection of new techniques based on geometry, spectral graph theory, computational linear algebra, randomization, and iterative methods from continuous optimization. These numerical techniques have allowed researchers to provide better provable algorithms for a wide range of graph problems, in some cases breaking algorithmic barriers that had stood for several decades.

The relationship between graph theory and numerical analysis has proven fruitful in the other direction as well. In addition to providing numerical techniques for graph algorithms, it has given rise to new combinatorial techniques for computational linear algebra. In particular, it has led to fast algorithms for Laplacian linear systems, which have broad applications in numerical scientific computing, machine learning, operations research, computer vision, and network analysis, among others.
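The Laplacian systems mentioned here have a simple concrete form. As a minimal illustration (my own sketch, not material from the talk), the following builds the Laplacian of a small graph and solves L x = b for a flow-style right-hand side:

```python
import numpy as np

# Laplacian of a small undirected graph (a 4-cycle): L = D - A,
# with D the degree matrix and A the adjacency matrix.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1
    L[v, v] += 1
    L[u, v] -= 1
    L[v, u] -= 1

# L is singular (its null space is spanned by the all-ones vector),
# so L x = b is solvable exactly when b sums to zero.  This b injects
# one unit of current at node 0 and extracts it at node 2.
b = np.array([1.0, 0.0, -1.0, 0.0])
x = np.linalg.lstsq(L, b, rcond=None)[0]  # minimum-norm solution
print(np.allclose(L @ x, b))  # True
```

Dense least-squares takes cubic time; the point of the fast solvers discussed in the talk is to exploit graph structure to get close to linear time in the number of edges.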

In this talk, I will discuss some of the recurring themes that arise in this confluence of fields. I will apply these to sketch algorithms that run in close to linear time for two basic algorithmic problems: solving Laplacian linear systems and finding approximately maximum flows on undirected graphs.

The talk will be based on joint work with Yin Tat Lee, Aaron Sidford, Lorenzo Orecchia, and Zeyuan Allen-Zhu.

Bio: Jonathan Kelner is an Associate Professor of Applied Mathematics in the MIT Department of Mathematics and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research focuses on the application of techniques from pure mathematics to the solution of fundamental problems in algorithms and complexity theory. He was an undergraduate at Harvard University, and he received his Ph.D. in Computer Science from MIT in 2006. Before joining the MIT faculty, he spent a year as a member of the Institute for Advanced Study. He has received a variety of awards for his work, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, Best Paper Awards at STOC 2011, SIGMETRICS 2011, and SODA 2014, the Harold E. Edgerton Faculty Achievement Award, and the MIT School of Science Teaching Prize.

Frank Permenter; Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

May 5, 2015, 2:00pm
THO 119
Frank Permenter, EECS Department, MIT.
Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone

Abstract: Semidefinite programs (SDPs) arising in practice frequently have no interior and hence can be reformulated as smaller problems that are more easily solved. Unfortunately, finding these reformulations requires executing a facial reduction procedure, which is typically computationally expensive. In this talk, we develop a practical facial reduction procedure based on computationally efficient approximations of the positive semidefinite cone. The proposed method simplifies SDPs with no interior by solving a sequence of easier optimization problems (e.g. linear programs) and could be a useful pre-processing step for SDP solvers. We demonstrate the effectiveness of the method on SDPs arising in practice, and describe our publicly available software implementation. We also give a post-processing procedure for dual solution recovery that applies generally to facial-reduction-based pre-processing techniques. Joint work with Pablo Parrilo.
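A toy illustration of the underlying idea (my own sketch, not the paper's procedure): if the constraints imply <A, X> = 0 for some PSD matrix A, then every feasible X lies on the face of the PSD cone carried by the null space of A, so the SDP can be rewritten over smaller matrices X = V Y V^T with Y PSD:

```python
import numpy as np

# Certificate A >= 0 with <A, X> = 0 forced by the (hypothetical) constraints.
A = np.diag([1.0, 1.0, 0.0])

# Null-space basis of A: feasible X must be supported here,
# so a 3x3 PSD variable collapses to a 1x1 PSD variable Y.
V = np.array([[0.0], [0.0], [1.0]])
Y = np.array([[2.0]])          # any PSD choice of the reduced variable
X = V @ Y @ V.T                # lift back to the original space

print(np.trace(A @ X))                    # 0.0 -> <A, X> = 0 holds
print(np.linalg.eigvalsh(X).min() >= 0)   # True -> X is still PSD
```

Finding such certificates A is the expensive step that the talk's approximations of the PSD cone are designed to make cheap.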

Joel Tropp; Applied Random Matrix Theory

CORE Series
Tuesday, April 28, 2015, 4:00pm
EEB 125
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
Applied Random Matrix Theory

Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria.

This talk offers an invitation to the field of matrix concentration inequalities. The presentation begins with some history of random matrix theory, and it introduces an important probability inequality for scalar random variables. It describes a flexible model for random matrices that is suitable for many problems, and it discusses one of the most important matrix concentration results, the matrix Bernstein inequality. The talk concludes with some applications drawn from algorithms, combinatorics, statistics, signal processing, scientific computing, and beyond.
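As a numerical illustration (my own sketch, not from the talk), the following checks one common expectation-style form of the matrix Bernstein bound, sqrt(2 v log(2d)) + (R/3) log(2d), on a sum of random-sign symmetric matrices; the constants are one standard variant:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 500

# Independent, zero-mean random matrices S_k = eps_k * H_k, with eps_k
# a uniform random sign and each H_k symmetric with spectral norm 1,
# so ||S_k|| <= R = 1 almost surely.
H = rng.standard_normal((n, d, d))
H = (H + H.transpose(0, 2, 1)) / 2
H /= np.linalg.norm(H, ord=2, axis=(1, 2))[:, None, None]
eps = rng.choice([-1.0, 1.0], size=n)
Z = (eps[:, None, None] * H).sum(axis=0)

# Matrix variance statistic: v = || sum_k E[S_k^2] || = || sum_k H_k^2 ||.
v = np.linalg.norm((H @ H).sum(axis=0), ord=2)

# Bernstein-type bound on ||Z|| (one common variant of the constants).
R = 1.0
bound = np.sqrt(2 * v * np.log(2 * d)) + (R / 3) * np.log(2 * d)
print(np.linalg.norm(Z, ord=2), "<=", bound)
```

The observed norm typically sits well below the bound; the inequality's strength is that it holds for any independent, bounded, zero-mean summands, not just this symmetric sign model.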

Bio: Joel A. Tropp is Professor of Applied & Computational Mathematics at the California Institute of Technology. He earned his PhD degree in Computational Applied Mathematics from the University of Texas at Austin in 2004. Dr. Tropp's work lies at the interface of applied mathematics, electrical engineering, computer science, and statistics. This research concerns the theoretical and computational aspects of data analysis, sparse modeling, randomized linear algebra, and random matrix theory. Dr. Tropp has received several major awards for young researchers, including the 2007 ONR Young Investigator Award and the 2008 Presidential Early Career Award for Scientists and Engineers. He is also the winner of the 6th Vasil A. Popov Prize and the 2011 Monroe H. Martin Prize.

Andreas Griewank; Lipschitzian Piecewise Smooth Optimization

April 14, 2015, 4:00pm
GUG 220
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

Abstract: We address the problem of minimizing objectives from the class of piecewise differentiable functions whose nonsmoothness can be encapsulated in the absolute value function. They possess local piecewise linear approximations with a discrepancy that can be bounded by a quadratic proximal term. This overestimating local model is continuous but generally nonconvex. It can be generated in its abs-normal form by a minor extension of standard algorithmic differentiation tools. Here we demonstrate how the local model can be minimized by a bundle-type method, which benefits from the availability of additional gray-box information via the abs-normal form. In the convex case our algorithm realizes the consistent steepest descent trajectory for which finite convergence was established by Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal, specifically covering counterexamples where steepest descent with exact line search famously fails.
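To make the "piecewise linear approximation with quadratic discrepancy" concrete, here is a tiny hypothetical example of my own (not from the talk): for f(x) = x^2 + |x - 1|, the piecewise linearization at a base point keeps the kink exactly and linearizes only the smooth part, so the model error of the increment is exactly the quadratic term dx^2:

```python
def f(x):
    return x**2 + abs(x - 1)

def plin_increment(x0, dx):
    # Piecewise linearization: linearize the smooth part x^2,
    # keep the absolute-value switch exact.
    return 2 * x0 * dx + abs(x0 - 1 + dx) - abs(x0 - 1)

x0 = 0.5
steps = [0.1, 0.4, 0.8]   # the last step crosses the kink at x = 1
errs = [abs(f(x0 + dx) - f(x0) - plin_increment(x0, dx)) for dx in steps]
print(all(e <= dx**2 + 1e-12 for e, dx in zip(errs, steps)))  # True
```

The error bound holds even for steps that cross the kink, which is the feature that makes the overestimating local model usable as a trust-region-style surrogate.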

Sébastien Bubeck; The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

May 12, 2015, 4:00pm
GUG 220
Sébastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

Abstract: A fundamental result in the theory of Interior Point Methods is Nesterov and Nemirovski's construction of a universal self-concordant barrier. In this talk I will introduce the entropic barrier, a new (and in some sense optimal) universal self-concordant barrier. The entropic barrier connects many topics of interest in Machine Learning: exponential families, convex duality, log-concave distributions, Mirror Descent, and exponential weights.
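For intuition, here is a one-dimensional sketch of the construction under my own simplifications (not from the talk): on the body K = [0, 1], take the log-partition function f(theta) = log of the integral over K of exp(theta * x), and define the barrier as its Fenchel conjugate f*(x). The conjugate stays bounded in the interior of K and blows up toward the boundary:

```python
import numpy as np

# f(theta) = log((e^theta - 1) / theta) for K = [0, 1];
# the entropic barrier is f*(x) = sup_theta (theta * x - f(theta)),
# computed here crudely by maximizing over a grid of dual variables.
theta = np.linspace(-400.0, 400.0, 200001)
theta = theta[np.abs(theta) > 1e-6]      # skip the removable 0/0 at theta = 0
f = np.log(np.expm1(theta) / theta)

def barrier(x):
    return np.max(theta * x - f)

# Bounded at the center, growing as x approaches the boundary point 1.
vals = [barrier(x) for x in (0.5, 0.9, 0.99)]
print(vals[0] < vals[1] < vals[2])  # True
```

In this 1-D case the growth near the boundary is logarithmic in the distance to it, consistent with a barrier of constant self-concordance parameter; the talk's result concerns the optimal parameter achievable in general dimension.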

Andrew Fitzgibbon; Learning about Shape

CORE Series
Monday, March 30, 2015, 4:00pm
EEB 125
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

Abstract: Vision is naturally concerned with shape. If we could recover a stable and compact representation of object shape from images, we would hope it might aid with numerous vision tasks. Just the silhouette of an object is a strong cue to its identity, and the silhouette is generated by its 3D shape. In computer vision, many representations have been explored: collections of points, “simple” shapes like ellipsoids or polyhedra, algebraic surfaces and other implicit surfaces, generalized cylinders and ribbons, and piecewise (rational) polynomial representations like NURBS and subdivision surfaces. Many of these can be embedded more or less straightforwardly into probabilistic shape spaces, and recovery (a.k.a. “learning”) of one such space is the goal of the experimental part of this talk.

When recovering shape from measurements, there is at first sight a natural hierarchy of stability: straight lines can represent very little but may be robustly recovered from data; then come conic sections, splines with fixed knots, and general piecewise representations. I will show, however, that one can pass almost immediately to piecewise representations without loss of robustness. In particular, I shall show how a popular representation in computer graphics—subdivision curves and surfaces—may readily be fit to a variety of image data using the technique for ellipse fitting introduced by Gander, Golub, and Strebel in 1994. I will show how we can address the previously difficult problem of recovering 3D shape from multiple silhouettes, and the considerably harder problem which arises when the silhouettes are not from the same object instance but from members of an object class, for example 30 images of different dolphins, each in a different pose. This requires that we simultaneously learn the shape space and the projections from each instance into its image. This simultaneous optimization is reminiscent of the bundle adjustment problem in computer vision, and indeed our most recent application, to tracking the human hand, makes good use of the Ceres Solver.
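The algebraic flavor of that fitting step can be sketched in a few lines. This is my own minimal numpy version in the spirit of the Gander-Golub-Strebel least-squares conic fit, not their exact algorithm: fit a conic a x^2 + b xy + c y^2 + d x + e y + f = 0 by minimizing the algebraic residual ||D p|| subject to ||p|| = 1, i.e. take the right singular vector of the design matrix with smallest singular value:

```python
import numpy as np

# Points sampled exactly from the ellipse x^2/4 + y^2 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
x, y = 2.0 * np.cos(t), np.sin(t)

# Design matrix: one row [x^2, xy, y^2, x, y, 1] per point.
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])

# Smallest right singular vector = algebraic least-squares conic.
conic = np.linalg.svd(D)[2][-1]

# Up to scale, this recovers x^2 + 4 y^2 - 4 = 0.
print(np.round(conic / conic[0], 6))
```

With noisy data the same one-shot linear solve still applies, which is what makes the direct fit attractive compared with iterative geometric fitting.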

Joint work with Tom Cashman and others.

Bio: Andrew Fitzgibbon is a principal researcher in the computer vision group at Microsoft Research Cambridge. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” (www.boujou.com) and to the human body tracking of Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and even a little neuroscience. He has published numerous highly cited papers, and received many awards for his work, including nine conference prizes for best paper or runner-up, the Silver Medal of the Royal Academy of Engineering, and the British Computer Society's Roger Needham Award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. He has been program chair of CVPR and ECCV. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.