Spring 2015 Calendar

March 30 [CORE]
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

April 14
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

April 28 [CORE]
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
Applied Random Matrix Theory

May 12
Sebastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

See below for the abstracts.

Joel Tropp; Applied Random Matrix Theory

CORE Series
Tuesday, April 28, 2015, 4pm
EEB 125
Joel Tropp, Caltech.
Applied Random Matrix Theory

Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that balance these criteria.

This talk offers an invitation to the field of matrix concentration inequalities. The presentation begins with some history of random matrix theory, and it introduces an important probability inequality for scalar random variables. It describes a flexible model for random matrices that is suitable for many problems, and it discusses one of the most important matrix concentration results, the matrix Bernstein inequality. The talk concludes with some applications drawn from algorithms, combinatorics, statistics, signal processing, scientific computing, and beyond.
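To make the matrix Bernstein inequality concrete, here is a small numerical sketch (not from the talk) that compares the empirical expected spectral norm of a random matrix series against the Bernstein-type bound. The summands, dimensions, and normalizations are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 20, 200, 100

# Independent, zero-mean summands: S_k = eps_k * A_k, with random signs
# eps_k and fixed symmetric matrices A_k scaled to spectral norm <= L = 1.
A = [rng.standard_normal((d, d)) for _ in range(n)]
A = [(M + M.T) / 2 for M in A]
A = [M / np.linalg.norm(M, 2) for M in A]
L = 1.0

# Variance proxy: v = || sum_k E[S_k^2] || = || sum_k A_k^2 ||.
v = np.linalg.norm(sum(M @ M for M in A), 2)

# Matrix Bernstein bound on E||Z||: sqrt(2 v log(2d)) + L log(2d) / 3.
bound = np.sqrt(2 * v * np.log(2 * d)) + L * np.log(2 * d) / 3

# Empirical estimate of E||Z|| over independent sign draws.
norms = []
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    Z = sum(e * M for e, M in zip(eps, A))
    norms.append(np.linalg.norm(Z, 2))

print(f"empirical E||Z|| ~ {np.mean(norms):.2f}, Bernstein bound {bound:.2f}")
```

The empirical norm typically lands within the bound by roughly a logarithmic factor, which is the known slack of this family of inequalities.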

Bio: Joel A. Tropp is Professor of Applied & Computational Mathematics at the California Institute of Technology. He earned his PhD in Computational Applied Mathematics from the University of Texas at Austin in 2004. Dr. Tropp’s work lies at the interface of applied mathematics, electrical engineering, computer science, and statistics. This research concerns the theoretical and computational aspects of data analysis, sparse modeling, randomized linear algebra, and random matrix theory. Dr. Tropp has received several major awards for young researchers, including the 2007 ONR Young Investigator Award and the 2008 Presidential Early Career Award for Scientists and Engineers. He is also the winner of the 6th Vasil A. Popov Prize and the 2011 Monroe H. Martin Prize.

Andreas Griewank; Lipschitzian Piecewise Smooth Optimization

April 14, 2015, 4:00pm
GUG 220
Andreas Griewank, Institut für Mathematik, Humboldt University of Berlin.
Lipschitzian Piecewise Smooth Optimization

Abstract: We address the problem of minimizing objectives from the class of piecewise differentiable functions whose nonsmoothness can be encapsulated in the absolute value function. Such functions possess local piecewise linear approximations with a discrepancy that can be bounded by a quadratic proximal term. This overestimating local model is continuous but generally nonconvex. It can be generated in its abs-normal form by a minor extension of standard algorithmic differentiation tools. Here we demonstrate how the local model can be minimized by a bundle-type method, which benefits from the availability of additional gray-box information via the abs-normal form. In the convex case our algorithm realizes the consistent steepest descent trajectory, for which finite convergence was established by Hiriart-Urruty and Lemaréchal; in particular, it covers the counterexamples where steepest descent with exact line search famously fails.
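As a toy illustration of the idea behind the abs-normal form (this is not Griewank's actual tooling), one can evaluate a piecewise smooth function while recording the argument of every absolute-value call, the "switching variables" that localize all the nonsmoothness. The recorder class and the rewrite of max below are hypothetical examples.

```python
import numpy as np

class AbsRecorder:
    """Record the switching variables z_i fed to abs() during one evaluation."""
    def __init__(self):
        self.switches = []

    def abs(self, z):
        self.switches.append(z)
        return np.abs(z)

def f(x, rec):
    # max(a, b) rewritten so that all nonsmoothness sits inside abs():
    # max(a, b) = (a + b + |a - b|) / 2
    return 0.5 * (x[0] + x[1] + rec.abs(x[0] - x[1]))

rec = AbsRecorder()
val = f(np.array([3.0, 1.0]), rec)
print(val, rec.switches)   # 3.0 with one switching variable, z1 = 2.0
```

The sign pattern of the recorded switches identifies which smooth piece of the function the current point lies on, which is exactly the extra gray-box information the bundle method can exploit.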

Sebastien Bubeck; The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

May 12, 2015, 4pm
GUG 220
Sebastien Bubeck, Microsoft Research, Redmond.
The Entropic Barrier: A New and Optimal Universal Self-concordant Barrier

Abstract: A fundamental result in the theory of Interior Point Methods is Nesterov and Nemirovski’s construction of a universal self-concordant barrier. In this talk I will introduce the entropic barrier, a new (and in some sense optimal) universal self-concordant barrier. The entropic barrier connects many topics of interest in Machine Learning: exponential families, convex duality, log-concave distributions, Mirror Descent, and exponential weights.
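For intuition, the entropic barrier of a convex body K evaluates at x the Fenchel conjugate of the log-partition function of the exponential family supported on K. The sketch below (an illustration under the stated one-dimensional assumption, not code from the talk) computes it numerically for K = [-1, 1], where the log-partition function has the closed form A(θ) = log(2 sinh θ / θ).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_partition(theta):
    # A(theta) = log ∫_{-1}^{1} e^{theta*y} dy = log(2*sinh(theta)/theta),
    # with the removable singularity A(0) = log 2 handled explicitly.
    if abs(theta) < 1e-8:
        return np.log(2.0)
    return np.log(2.0 * np.sinh(theta) / theta)

def entropic_barrier(x):
    # f(x) = sup_theta [theta*x - A(theta)], the convex conjugate of A.
    res = minimize_scalar(lambda t: -(t * x - log_partition(t)),
                          bounds=(-200, 200), method="bounded")
    return -res.fun

for x in (0.0, 0.5, 0.9, 0.99):
    print(f"f({x}) = {entropic_barrier(x):.4f}")
```

The printed values show the barrier behavior: minimized at the center of the interval (with value -log 2) and blowing up as x approaches the boundary.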

Andrew Fitzgibbon; Learning about Shape

CORE Series
Monday, March 30, 2015, 4pm
EEB 125
Andrew Fitzgibbon, Microsoft Research, Cambridge.
Learning about Shape

Abstract: Vision is naturally concerned with shape. If we could recover a stable and compact representation of object shape from images, we would hope it might aid with numerous vision tasks. Just the silhouette of an object is a strong cue to its identity, and the silhouette is generated by its 3D shape. In computer vision, many representations have been explored: collections of points, “simple” shapes like ellipsoids or polyhedra, algebraic surfaces and other implicit surfaces, generalized cylinders and ribbons, and piecewise (rational) polynomial representations like NURBS and subdivision surfaces. Many of these can be embedded more or less straightforwardly into probabilistic shape spaces, and recovery (a.k.a. “learning”) of one such space is the goal of the experimental part of this talk.

When recovering shape from measurements, there is at first sight a natural hierarchy of stability: straight lines can represent very little but may be robustly recovered from data; then come conic sections, splines with fixed knots, and general piecewise representations. I will show, however, that one can pass almost immediately to piecewise representations without loss of robustness. In particular, I shall show how a popular representation in computer graphics—subdivision curves and surfaces—may readily be fit to a variety of image data using the technique for ellipse fitting introduced by Gander, Golub, and Strebel in 1994. I show how we can address the previously difficult problem of recovering 3D shape from multiple silhouettes, and the considerably harder problem which arises when the silhouettes are not from the same object instance but from members of an object class, for example 30 images of different dolphins, each in a different pose. This requires that we simultaneously learn the shape space and the projections from each instance into its image. This simultaneous optimization is reminiscent of the bundle adjustment problem in computer vision, and indeed our most recent application, to tracking the human hand, makes good use of the Ceres Solver.
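The algebraic least-squares conic fit in the spirit of Gander, Golub, and Strebel can be sketched in a few lines: stack the monomials of each data point into a design matrix and take the smallest right singular vector as the conic coefficients. The ellipse parameters and noise level below are illustrative choices, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample noisy points from an ellipse centered at (2, 1) with semi-axes 3 and 1.
t = rng.uniform(0, 2 * np.pi, 60)
x = 2 + 3 * np.cos(t) + 0.01 * rng.standard_normal(60)
y = 1 + 1 * np.sin(t) + 0.01 * rng.standard_normal(60)

# Algebraic fit: minimize ||D p|| over conic coefficients p = (a,b,c,d,e,f)
# subject to ||p|| = 1, solved by the smallest right singular vector of D.
D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(D)
a, b, c, d, e, f = Vt[-1]

# The center is where the gradient of the conic vanishes.
center = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
print("estimated center:", center)   # close to (2, 1)
```

Because the fit reduces to a linear least-squares problem with a norm constraint, it is as robust to solve as a line fit, which is the point of the "hierarchy collapses" argument above.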

Joint work with Tom Cashman and others.

Bio: Andrew Fitzgibbon is a principal researcher in the computer vision group at Microsoft Research Cambridge. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” (www.boujou.com) and to the human body tracking of Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and even a little neuroscience. He has published numerous highly cited papers, and received many awards for his work, including 9 conference prizes for best paper or runner-up, the Silver Medal of the Royal Academy of Engineering, and the British Computer Society’s Roger Needham Award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. He has been program chair of CVPR and ECCV. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.

Jiashan Wang; Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

March 10, 2015, 4:00pm
GUG 204
Jiashan Wang, Department of Mathematics, UW.
Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

Abstract: We present two matrix-free methods for solving exact penalty subproblems on product sets that arise when solving large-scale optimization problems. The first approach is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) technology applied to our setting. The main computational costs of each algorithm are the repeated minimizations of convex quadratic functions, which can be performed matrix-free. We prove that both algorithms are globally convergent under loose assumptions, and that each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments exhibit the ability of both algorithms to efficiently find inexact solutions. However, in certain cases, these experiments indicate that IRWA can be significantly more efficient than ADAL.
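The reweighting idea can be illustrated with classical iteratively reweighted least squares (IRLS) for $\ell_1$ regression, a simpler cousin of the IRWA scheme in the talk (the problem sizes and the fixed relaxation floor below are illustrative assumptions, not the paper's algorithm).

```python
import numpy as np

rng = np.random.default_rng(2)

# Robust regression min_x ||A x - b||_1 via iteratively reweighted
# least squares: each iteration solves a weighted quadratic model.
m, n = 100, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true
b[:10] += 5.0                      # a few gross outliers; l1 should reject them

x = np.linalg.lstsq(A, b, rcond=None)[0]       # warm start: plain least squares
for _ in range(50):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), 1e-8)      # relaxation floor guards tiny residuals
    W = np.sqrt(w)[:, None]
    # Weighted least-squares step: min_x sum_i w_i (a_i^T x - b_i)^2
    x = np.linalg.lstsq(W * A, W[:, 0] * b, rcond=None)[0]

print("error vs truth:", np.linalg.norm(x - x_true))
```

Each step only requires solving a convex quadratic, which could equally be done matrix-free with conjugate gradients; that structural property is what both IRWA and ADAL exploit at scale.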

Joint work with James Burke, Frank Curtis, and Hao Wang.