Winter 2015 Calendar

January 13 [CORE]
James Renegar, School of Operations Research and Information Engineering, Cornell.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

January 15 [CORE, CSE]
Jon Kleinberg, Departments of Computer Science and Information Science, Cornell.
Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

January 27
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

February 10
Emily Fox, Department of Statistics, UW.
TBA

February 27 [MAC, CORE]
Jeannette Wing, Corporate Vice President of Microsoft Research.
TBA

April 28 [CORE]
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
TBA

See below for the abstracts.

Hon Leung Lee; Minimizing Distance to an Orthogonally Invariant Matrix Set

January 27, 2015, 4:00pm
GUG 204
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

Abstract: The problem of finding the closest point in a set, with respect to Euclidean distance, arises frequently in applications. In this talk we focus on orthogonally invariant matrix sets and provide a framework for computing and counting the real smooth critical points of this minimization problem, as well as finding the minimizers. The key results are the “transfer principles” that allow calculations to be done in the Euclidean space containing the vectors of singular values of the members of the matrix set. Running examples include the Eckart-Young theorem and finding the nearest essential matrix. This is joint work with Dmitriy Drusvyatskiy and Rekha Thomas.
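
As a concrete instance of the transfer principle in its simplest setting, consider the rank constraint, which is orthogonally invariant: by the classical Eckart-Young theorem, the nearest matrix of rank at most k is obtained by operating only on the vector of singular values. A minimal sketch (illustrative, not code from the talk):

```python
# Nearest rank-k matrix in Frobenius norm via the Eckart-Young theorem.
# The matrix problem "transfers" to a simple operation (truncation) on the
# vector of singular values.
import numpy as np

def nearest_rank_k(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[k:] = 0.0                        # all the work happens on the singular values
    return U @ np.diag(s) @ Vt

A = np.random.randn(5, 4)
B = nearest_rank_k(A, 2)
print(np.linalg.matrix_rank(B))        # 2
print(np.linalg.norm(A - B))           # distance from A to the rank-2 set
```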

Jon Kleinberg; Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

CORE Series, joint with the CSE Distinguished Lecture Series and the Data Science Seminar
January 15, 2015, 3:30pm
EEB 105
Jon Kleinberg, Departments of Computer Science and Information Science, Cornell University.
Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

Abstract: Many systems involve the allocation of rewards for achievements, and these rewards produce a set of incentives that in turn guide behavior. Such effects are visible in many domains from everyday life, and they are increasingly forming a designed aspect of participatory on-line sites through the use of badges and other reward systems. We consider several aspects of the interaction between rewards and incentives in the context of collective effort, including a method for reasoning about on-line user activity in the presence of badges, and a graph-theoretic framework for analyzing procrastination and other forms of behavior that are inconsistent over time. The talk will be based on joint work with Ashton Anderson, Dan Huttenlocher, Jure Leskovec, and Sigal Oren.
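
The graph-theoretic view of procrastination can be made concrete with a small simulation. The sketch below uses a common present-bias formulation (an agent inflates the cost of the immediate edge by a factor b > 1 and re-plans at every node); the graph and parameter values are made up for illustration and are not from the talk.

```python
# Present-biased planning on a task graph: at each node the agent picks the
# outgoing edge minimizing b * (immediate cost) + (true remaining shortest-path
# cost), then re-plans.  With b > 1 this can produce procrastination-like detours.
import heapq

def dist_to_goal(graph, goal):
    """True shortest-path distance from every node to goal (Dijkstra on the reversed graph)."""
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist, pq = {goal: 0.0}, [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in rev.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def naive_agent_path(graph, start, goal, b):
    dist = dist_to_goal(graph, goal)
    path, node = [start], start
    while node != goal:
        node = min(graph[node], key=lambda vc: b * vc[1] + dist[vc[0]])[0]
        path.append(node)
    return path

# "Work now" (s -> w -> t, total cost 5) vs. "defer" (s -> d -> t, total cost 6).
graph = {"s": [("w", 4.0), ("d", 1.0)], "w": [("t", 1.0)],
         "d": [("t", 5.0)], "t": []}
print(naive_agent_path(graph, "s", "t", b=1.0))  # ['s', 'w', 't']: takes the optimal route
print(naive_agent_path(graph, "s", "t", b=3.0))  # ['s', 'd', 't']: defers and pays more
```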

Bio: Jon Kleinberg is a professor at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media. His work has been supported by an NSF CAREER Award, an ONR Young Investigator Award, a MacArthur Foundation Fellowship, a Packard Foundation Fellowship, a Simons Investigator Award, a Sloan Foundation Fellowship, and grants from Google, Yahoo!, and the NSF. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

James Renegar; Extending the Applicability of Efficient First-Order Methods for Convex Optimization

CORE Series
January 13, 2015, 4:30pm
EEB 125
James Renegar, School of Operations Research and Information Engineering, Cornell University.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

Abstract: The study of first-order methods has largely dominated research in continuous optimization for the last decade. Yet the range of problems for which efficient, easily applied first-order methods have been developed remains surprisingly limited, even though much has been achieved in high-profile areas such as compressed sensing.

We present a simple transformation of any linear or semidefinite (or hyperbolic) program into an equivalent convex optimization problem whose only constraints are linear equations. The objective function is defined on the whole space, making virtually all subgradient methods immediately applicable. We observe, moreover, that the objective function is naturally smoothed, thereby allowing most first-order methods to be applied.
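
To illustrate what reducing the constraints to linear equations buys, here is a generic projected subgradient sketch for minimizing f(x) subject to Ax = b; it is not Renegar's transformed problem itself, and f, A, and b are placeholders.

```python
# Generic projected subgradient method for min f(x) subject to Ax = b.
# Each step moves along the projection of a subgradient onto null(A), so
# every iterate stays feasible.
import numpy as np

def projected_subgradient(f, subgrad, A, b, x0, steps=2000):
    pinv = np.linalg.pinv(A)
    P = np.eye(A.shape[1]) - pinv @ A      # orthogonal projector onto null(A)
    x = x0 + pinv @ (b - A @ x0)           # project x0 onto {x : Ax = b}
    best_x, best_val = x, f(x)
    for k in range(1, steps + 1):
        g = P @ subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0:
            break                          # projected subgradient vanished
        x = x - g / (gnorm * np.sqrt(k))   # normalized, diminishing steps
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# Toy instance: minimize ||x||_1 subject to a random system Ax = b.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((3, 8)), rng.standard_normal(3)
x, val = projected_subgradient(lambda z: np.abs(z).sum(), np.sign, A, b, np.zeros(8))
print(val, np.linalg.norm(A @ x - b))      # objective value and feasibility residual
```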

We develop complexity bounds in the unsmoothed case for a particular subgradient method, and in the smoothed case for Nesterov’s original optimal first-order method for smooth functions. In both cases we achieve the desired bounds on the number of iterations.

Perhaps most surprising is that both the transformation and the basic theory are simple, and yet the approach has been overlooked until now: a blind spot.

Bio: James Renegar has been a Professor in the School of Operations Research and Information Engineering at Cornell University since 1987, and has served as its Director (2004–2009). His research interests lie in continuous optimization, and more broadly, in the interplay between algorithms, geometry, and algebraic techniques. Professor Renegar has made fundamental contributions to the design and analysis of interior point methods, error analysis in convex optimization, and numeric and algebraic computational complexity. He was one of the five founding members of the Society for Foundations of Computational Mathematics, and has long been on its board of directors. He was the chair of the workshop committee at the last meeting of the society (2014). Professor Renegar was a semi-plenary speaker at the 17th International Symposium on Mathematical Programming (2000) and an invited speaker at the International Congress of Mathematicians (1990). In addition, he has published an influential book on the mathematics of interior point methods (SIAM 2001) and has received numerous teaching awards.

Henry Wolkowicz; Taking Advantage of Degeneracy in Cone Optimization: with Applications to Sensor Network Localization

November 13, 2014, 4:00pm
EEB 003
Henry Wolkowicz, Department of Mathematics, University of Waterloo.
Taking Advantage of Degeneracy in Cone Optimization: with Applications to Sensor Network Localization

Abstract: The elegant theoretical results for strong duality and strict complementarity for linear programming, LP, lie behind the success of current algorithms. However, the theory and preprocessing techniques that are successful for LP can fail for cone programming over nonpolyhedral cones.

Surprisingly, many important applications of semidefinite programming, SDP, that arise from relaxations of hard combinatorial problems are degenerate. (Slater’s constraint qualification fails.) This includes relaxations for problems such as quadratic assignment, graph partitioning, set covering and partitioning, and sensor network localization and molecular conformation. Rather than being a disadvantage, we show that this degeneracy can be exploited. In particular, several huge instances of SDP completion problems can be solved quickly and to extremely high accuracy. We illustrate this on the sensor network localization problem.
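
A toy example (not an instance from the talk) shows how such degeneracy arises and why it can be exploited: if the constraints force a diagonal entry of a positive semidefinite X to zero, the entire corresponding row and column must vanish, so no positive definite feasible point exists (Slater fails) and the feasible region lives in a smaller face of the cone. Shrinking the problem to that face is the facial-reduction idea.

```python
# If X is psd and X[0,0] = 0, every 2x2 principal minor [[0, x], [x, d]] must
# be psd, forcing x = 0.  Hence the first row and column of X vanish and the
# feasible set sits inside a smaller psd cone, which facial reduction exploits.
import numpy as np

def reduce_first_block(X, tol=1e-12):
    assert abs(X[0, 0]) <= tol
    assert np.allclose(X[0, :], 0) and np.allclose(X[:, 0], 0)
    return X[1:, 1:]                 # equivalent variable over the smaller cone

X = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(reduce_first_block(X))         # all the information sits in the 2x2 block
```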

Peter Bürgisser; Condition of Convex Optimization and Spherical Intrinsic Volumes

CORE Series
December 2, 2014, 4:00pm
EEB 125
Peter Bürgisser, Institute for Mathematics, TU Berlin.
Condition of Convex Optimization and Spherical Intrinsic Volumes

Abstract: The analysis of the stability and efficiency of algorithms for convex optimization naturally leads to the study of condition numbers. The Grassmann condition, which is a geometric version of Renegar’s condition, is especially suited for a probabilistic analysis. Such analysis can be performed by relying on techniques from spherical convex geometry and differential geometry. Along this way, we obtain an average analysis of the Grassmann condition number that holds for any regular convex cone. A closer look prompts the investigation of the spherical counterparts of intrinsic volumes, a notion thoroughly studied for Euclidean spaces but much less so for spheres, so that many fascinating questions remain.
Joint work with Dennis Amelunxen.
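
For background, these are standard facts about conic (spherical) intrinsic volumes, not results from the talk: for a closed convex cone C in R^d they form a probability distribution on {0, ..., d},

```latex
\[
  v_j(C) \ge 0 \quad (j = 0, \dots, d), \qquad \sum_{j=0}^{d} v_j(C) = 1,
\]
```

and for a j-dimensional linear subspace L the distribution concentrates: v_j(L) = 1.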

Bio: Peter Bürgisser has been a professor of Mathematics at the Technical University of Berlin since 2013. Prior to that he was a professor of Mathematics at the University of Paderborn. His research interests lie in algebraic complexity theory, both the design of efficient algorithms for algebraic problems and the quest for lower bounds, as well as symbolic and numeric computation and, more recently, the probabilistic analysis of numerical algorithms. Bürgisser was an invited speaker at the International Congress of Mathematicians in Hyderabad in 2010 and a plenary speaker at Foundations of Computational Mathematics 2008 in Hong Kong. He is an associate editor of Computational Complexity and an editor of the journal Foundations of Computational Mathematics. He served as a workshop co-organizer for the Foundations of Computational Mathematics conferences (2005, 2008, 2011), for the Special Year on Applications of Algebraic Geometry (IMA 2006/2007), and for the Oberwolfach workshops on Complexity Theory (2009, 2012).

Daniela Witten; Flexible Graphical Modeling

November 18, 2014, 4:00pm
GUG 204
Daniela Witten, Departments of Biostatistics and Statistics, UW.
Flexible Graphical Modeling

Abstract: In recent years, there has been considerable interest in estimating conditional dependence relationships among random variables in the high-dimensional setting, in which the number of variables far exceeds the number of available observations. Most prior work has assumed that the variables are multivariate Gaussian, or that the conditional means of the variables are linear. Unfortunately, if these assumptions are violated, then the resulting estimates can be inaccurate.

I will present two recent lines of work on learning the conditional dependence graph of a set of random variables in the non-Gaussian setting. First, I will present a semi-parametric method that allows the conditional means of the features to take on an arbitrary additive form. Next, I will present an approach for learning a graph in which the distribution of each node, conditioned on the others, may have a different parametric form. Each approach is formulated as the solution to a convex optimization problem corresponding to a penalized log likelihood.

This is joint work with Ali Shojaie, Shizhe Chen, and Arend Voorman.
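
For contrast, here is a baseline sketch of the Gaussian case that this work moves beyond (not the talk's methods; the data below are placeholders): the l1-penalized Gaussian log likelihood, i.e., the graphical lasso, as implemented in scikit-learn.

```python
# Baseline: the graphical lasso (l1-penalized Gaussian log likelihood), the
# standard estimator of a conditional dependence graph under the Gaussian
# assumption that the talk's methods relax.  Placeholder data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))     # n = 200 observations of p = 10 variables
model = GraphicalLassoCV().fit(X)
Theta = model.precision_               # estimated precision (inverse covariance)
# Edges of the graph = nonzero off-diagonal entries of the precision matrix.
edges = np.argwhere((np.abs(Theta) > 1e-8) & ~np.eye(10, dtype=bool))
print(edges)
```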