Winter and Spring 2015 Calendar

January 13 [CORE]
James Renegar, School of Operations Research and Information Engineering, Cornell.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

January 15 [CORE, CSE]
Jon Kleinberg, Departments of Computer Science and Information Science, Cornell.
Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

January 27
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

February 24
Emily Fox, Department of Statistics, UW.
Leveraging Optimization Techniques to Scale Bayesian Inference

February 27 [MAC, CORE]
Jeannette Wing, Corporate Vice President of Microsoft Research.
Computational Thinking

March 10
Jiashan Wang, Department of Mathematics, UW.
Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

April 14
Sébastien Bubeck, Microsoft Research.
TBA

April 28 [CORE]
Joel Tropp, Department of Computing and Mathematical Sciences, Caltech.
TBA

See below for the abstracts.

Jiashan Wang; Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

March 10, 2015, 4:00pm
GUG 204
Jiashan Wang, Department of Mathematics, UW.
Iterative Re-weighted Linear Least Squares for Exact Penalty Subproblems on Product Sets

Abstract: We present two matrix-free methods for solving exact penalty subproblems on product sets that arise when solving large-scale optimization problems. The first approach is a novel iterative re-weighting algorithm (IRWA), which iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) technology applied to our setting. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. We prove that both algorithms are globally convergent under loose assumptions and that each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments demonstrate that both algorithms efficiently find inexact solutions; in certain cases, however, IRWA can be significantly more efficient than ADAL.

Joint work with James Burke, Frank Curtis, and Hao Wang.
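
A generic iteratively re-weighted least-squares loop illustrates the structure the abstract describes: each pass minimizes a convex quadratic (weighted least-squares) model and then updates a relaxation quantity. The sketch below is not the authors' IRWA; it applies the same re-weighting idea to the simpler problem of minimizing ||Ax - b||_1, with a scalar relaxation parameter eps standing in for the relaxation vector, and a direct solve standing in for the matrix-free inner solver one would use at scale. Function name and defaults are illustrative.

```python
import numpy as np

def irls_l1(A, b, x0, eps=1.0, eps_decay=0.9, max_iter=200, tol=1e-8):
    """Generic iteratively re-weighted least squares for min_x ||Ax - b||_1.

    Each pass solves a weighted least-squares (convex quadratic) model of the
    penalty, then shrinks the relaxation parameter eps; this mimics, in
    simplified form, the "minimize a quadratic model, update the relaxation"
    loop described in the abstract. Assumes A has full column rank.
    """
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r = A @ x - b
        w = 1.0 / np.sqrt(r**2 + eps**2)       # re-weighting from current residuals
        # weighted least-squares subproblem: min_x sum_i w_i (a_i^T x - b_i)^2
        x_new = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x, eps = x_new, eps * eps_decay        # relax less on the next pass
    return x

# Illustrative use on a small random instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 5))
b = A @ np.ones(5) + 0.05 * rng.standard_normal(60)
x_hat = irls_l1(A, b, x0=np.zeros(5))
```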

Emily Fox; Leveraging Optimization Techniques to Scale Bayesian Inference

February 24, 2015, 4:00pm
GUG 204
Emily Fox, Department of Statistics, UW.
Leveraging Optimization Techniques to Scale Bayesian Inference

Abstract: Data streams of increasing complexity are being collected in fields ranging from neuroscience, genomics, and environmental monitoring to e-commerce, enabled by technologies and infrastructures that were previously unavailable. With the advent of Markov chain Monte Carlo (MCMC), combined with the computational power to implement such algorithms, deploying increasingly expressive models has been a focus in recent decades. Unfortunately, traditional algorithms for Bayesian inference in these models, such as MCMC and variational inference, do not typically scale to the large datasets encountered in practice. Likewise, these algorithms are not applicable to the increasingly common situation in which an unbounded amount of data arrives as a stream and inferences must be made on the fly. In this talk, we will present a series of algorithms (stochastic gradient Hamiltonian Monte Carlo, HMM stochastic variational inference, and streaming Bayesian nonparametric inference) that address various aspects of the challenge of scaling Bayesian inference; our algorithms focus on deploying stochastic gradients and working within an optimization framework. We demonstrate our methods on a variety of applications, including online movie recommendations, segmenting a human chromatin data set with 250 million observations, and clustering a stream of New York Times documents.

Joint work with Tianqi Chen, Nick Foti, Dillon Laird, Alex Tank, and Jason Xu.
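
For readers unfamiliar with the first of these algorithms, the core stochastic gradient Hamiltonian Monte Carlo update (Chen, Fox & Guestrin, 2014) is short enough to sketch: a momentum variable is driven by a noisy minibatch gradient, with friction and injected noise compensating for the gradient noise. The sampler below is only a toy illustration; the step size, friction constant, and the noisy-gradient example are assumptions chosen for readability, not settings from the talk.

```python
import numpy as np

def sghmc(grad_log_post, theta0, n_steps=5000, step=1e-2, friction=0.1,
          noise_est=0.0, rng=None):
    """Minimal stochastic gradient Hamiltonian Monte Carlo sampler.

    grad_log_post(theta) returns a *stochastic* estimate of the gradient of
    the log posterior (in practice, computed on a minibatch of data).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    noise_scale = np.sqrt(2.0 * step * (friction - noise_est))
    samples = []
    for _ in range(n_steps):
        theta = theta + step * v                                   # position update
        v = (v + step * grad_log_post(theta)                       # noisy gradient force
               - step * friction * v                               # friction term
               + noise_scale * rng.standard_normal(theta.shape))   # injected noise
        samples.append(theta.copy())
    return np.array(samples)

# Toy target: standard normal posterior with artificially noisy gradients.
rng = np.random.default_rng(1)
noisy_grad = lambda th: -th + 0.1 * rng.standard_normal(th.shape)
draws = sghmc(noisy_grad, theta0=np.zeros(1), rng=rng)
```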

Hon Leung Lee; Minimizing Distance to an Orthogonally Invariant Matrix Set

January 27, 2015, 4:00pm
GUG 204
Hon Leung Lee, Department of Mathematics, UW.
Minimizing Distance to an Orthogonally Invariant Matrix Set

Abstract: The problem of finding the closest point in a set, with respect to Euclidean distance, arises frequently in applications. In this talk we focus on orthogonally invariant matrix sets and provide a framework for computing and counting the real smooth critical points of this minimization problem, as well as finding the minimizers. The key results are "transfer principles" that allow the calculations to be carried out in the Euclidean space containing the vectors of singular values of the members of the matrix set. Running examples include the Eckart-Young theorem and finding the nearest essential matrix. This is joint work with Dmitriy Drusvyatskiy and Rekha Thomas.
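
The Eckart-Young example mentioned in the abstract is the cleanest illustration of such a transfer principle: the nearest matrix of rank at most k is obtained by acting only on the vector of singular values. A short sketch (the function name is ours, not from the talk):

```python
import numpy as np

def nearest_rank_k(A, k):
    """Closest matrix of rank <= k to A in Frobenius norm (Eckart-Young).

    The matrix problem "transfers" to the singular values: keep the k
    largest and zero out the rest, leaving the singular vectors untouched.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s is sorted descending
    s_trunc = np.where(np.arange(s.size) < k, s, 0.0)
    return (U * s_trunc) @ Vt

A = np.random.default_rng(0).standard_normal((6, 4))
A2 = nearest_rank_k(A, 2)
assert np.linalg.matrix_rank(A2) == 2
```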

Jon Kleinberg; Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

CORE Series, joint with the CSE Distinguished Lecture Series and the Data Science Seminar
January 15, 2015, 3:30pm
EEB 105
Jon Kleinberg, Departments of Computer Science and Information Science, Cornell University.
Incentives for Collective Behavior: Badges, Procrastination, and Long-Range Goals

Abstract: Many systems involve the allocation of rewards for achievements, and these rewards produce a set of incentives that in turn guide behavior. Such effects are visible in many domains of everyday life, and they are increasingly becoming a designed aspect of participatory on-line sites through the use of badges and other reward systems. We consider several aspects of the interaction between rewards and incentives in the context of collective effort, including a method for reasoning about on-line user activity in the presence of badges, and a graph-theoretic framework for analyzing procrastination and other forms of behavior that are inconsistent over time. The talk will be based on joint work with Ashton Anderson, Dan Huttenlocher, Jure Leskovec, and Sigal Oren.

Bio: Jon Kleinberg is a professor at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media. His work has been supported by an NSF CAREER Award, an ONR Young Investigator Award, a MacArthur Foundation Fellowship, a Packard Foundation Fellowship, a Simons Investigator Award, a Sloan Foundation Fellowship, and grants from Google, Yahoo!, and the NSF. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

James Renegar; Extending the Applicability of Efficient First-Order Methods for Convex Optimization

CORE Series
January 13, 2015, 4:30pm
EEB 125
James Renegar, School of Operations Research and Information Engineering, Cornell University.
Extending the Applicability of Efficient First-Order Methods for Convex Optimization

Abstract: The study of first-order methods has largely dominated research in continuous optimization for the last decade, yet the range of problems for which efficient and easily applicable first-order methods have been developed remains surprisingly limited, even though much has been achieved in some high-profile areas such as compressed sensing.

We present a simple transformation of any linear or semidefinite (or hyperbolic) program into an equivalent convex optimization problem whose only constraints are linear equations. The objective function is defined on the whole space, making virtually all subgradient methods immediately applicable. We observe, moreover, that the objective function is naturally smoothed, thereby allowing most first-order methods to be applied.

We develop complexity bounds in the unsmoothed case for a particular subgradient method, and in the smoothed case for Nesterov's original optimal first-order method for smooth functions. In both cases we achieve the desired bounds on the number of iterations.

Perhaps most surprising is that both the transformation and the basic theory are simple, and yet the approach has been overlooked until now; it has been a blind spot.
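
To make concrete why constraints consisting only of linear equations are so convenient, the sketch below runs a generic projected subgradient method on a problem of exactly that form: minimize a nonsmooth convex f subject to Ax = b. This is not Renegar's transformation or his particular method; the l1 objective in the usage example and the diminishing step-size rule are illustrative assumptions.

```python
import numpy as np

def projected_subgradient(f, subgrad, x0, A, b, step0=1.0, max_iter=2000):
    """Projected subgradient method for: minimize f(x) subject to A x = b.

    Because the only constraints are linear equations, feasibility is restored
    after every step by an orthogonal projection onto {x : A x = b}.
    """
    A_pinv = np.linalg.pinv(A)
    project = lambda y: y - A_pinv @ (A @ y - b)
    x = project(np.array(x0, dtype=float))
    x_best, f_best = x.copy(), f(x)
    for k in range(1, max_iter + 1):
        x = project(x - (step0 / np.sqrt(k)) * subgrad(x))   # diminishing steps
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)                  # keep the best iterate
    return x_best

# Illustration: minimize ||x||_1 subject to A x = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))
b = A @ rng.standard_normal(8)
x_hat = projected_subgradient(lambda x: np.abs(x).sum(), np.sign,
                              x0=np.zeros(8), A=A, b=b)
```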

Bio: James Renegar has been a Professor in the School of Operations Research and Information Engineering at Cornell University since 1987, and served as its Director (2004–2009). His research interests lie in continuous optimization and, more broadly, in the interplay between algorithms, geometry, and algebraic techniques. Professor Renegar has made fundamental contributions to the design and analysis of interior point methods, error analysis in convex optimization, and numeric and algebraic computational complexity. He was one of the five founding members of the Society for Foundations of Computational Mathematics and has long served on its board of directors; he chaired the workshop committee at the society's most recent meeting (2014). Professor Renegar was a semi-plenary speaker at the 17th International Symposium on Mathematical Programming (2000) and an invited speaker at the International Congress of Mathematicians (1990). In addition, he has published an influential book on the mathematics of interior point methods (SIAM, 2001) and has received numerous teaching awards.