
Tristan van Leeuwen; A penalty method for PDE-constrained optimization in inverse problems

Feb 23, 2016, 4pm
SIG 226
Tristan van Leeuwen, Utrecht University

Abstract: Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal, then, is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this talk, we present an alternative method based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method reduces some of the non-linearity of the problem and is less sensitive to the initial iterate.
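
To make the penalty formulation concrete, here is a minimal sketch on a toy linear system A(m)u = q with partial observations d = Pu. All names, sizes, and the penalty parameter are illustrative assumptions, not the speaker's code; the point is that the state is eliminated by a single regularized least-squares solve rather than an exact PDE solve.

```python
# A minimal sketch of the quadratic-penalty idea for a toy linear "PDE"
# A(m) u = q with partial observations d = P u. All names (A, P, q, d,
# lam) are illustrative assumptions, not the speaker's actual code.
import numpy as np
from scipy.optimize import minimize

n = 20
rng = np.random.default_rng(0)

def A(m):
    # toy parameter-dependent operator: A(m) = diag(m) + small shift
    return np.diag(m) + 0.1 * np.eye(n)

P = np.eye(n)[::4]          # observe every 4th state entry
q = np.ones(n)              # source term
m_true = 1.0 + rng.random(n)
u_true = np.linalg.solve(A(m_true), q)
d = P @ u_true              # noise-free data for simplicity

lam = 10.0                  # penalty parameter

def phi(m):
    # Eliminate the state: u_bar solves the stacked least-squares problem
    #   min_u ||P u - d||^2 + lam^2 ||A(m) u - q||^2,
    # costing roughly one (regularized) solve per evaluation, as in the
    # reduced approach, but u_bar need not satisfy the PDE exactly.
    K = np.vstack([P, lam * A(m)])
    rhs = np.concatenate([d, lam * q])
    u_bar, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    r = K @ u_bar - rhs
    return 0.5 * r @ r

m0 = np.ones(n)             # initial guess for the parameters
res = minimize(phi, m0, method="L-BFGS-B")
print("misfit at solution:", res.fun)
```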

Jane Ye; On solving bilevel programming problems

Feb 9, 2016, 4pm
SIG 226
Jane Ye, University of Victoria, Canada

Abstract: A bilevel program is a sequence of two optimization problems where the constraint region of the upper level problem is determined implicitly by the solution set to the lower level problem. In this talk we report some recent progress on solving bilevel programming problems. In the case where all functions are polynomials, we propose a numerical method for globally solving the polynomial bilevel program. For the simple bilevel program where the lower level constraints are independent of the upper level variables we propose a numerical method for finding a stationary point.
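
As a toy illustration of the nested structure (not of the numerical methods in the talk), here is a bilevel program whose lower level has a closed-form solution, so the upper level can be solved by direct substitution; the functions and numbers are made up.

```python
# A tiny illustration of the implicit structure of a bilevel program:
# the lower-level solution y(x) is substituted into the upper level.
# Closed-form substitution only works for toy problems; the talk
# concerns numerical methods for the general (e.g. polynomial) case.
from scipy.optimize import minimize_scalar

def lower_level(x):
    # argmin_y (y - x)^2 has the closed form y(x) = x
    return x

def upper_level(x):
    y = lower_level(x)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

res = minimize_scalar(upper_level)
print("x* =", res.x, " y* =", lower_level(res.x))  # x* = 1.5
```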

Winter 2016 Calendar

January 12
James Davis, University of Illinois at Urbana-Champaign
Customer Choice Models and Assortment Optimization

January 19
João Gouveia, Department of Mathematics, University of Coimbra
Positive semidefinite rank and representations of polytopes

February 9
Jane Ye, Department of Mathematics and Statistics, University of Victoria
On solving bilevel programming problems

February 23
Tristan van Leeuwen, Utrecht University
A penalty method for PDE-constrained optimization in inverse problems

March 8 (CORE Series)
Katya Scheinberg, Lehigh University
Using randomized models in black-box, derivative free and stochastic optimization

James Davis; Customer Choice Models and Assortment Optimization

Tuesday, January 12, 2016, 4:00 pm
SIG 226

James Davis, University of Illinois at Urbana-Champaign.
Customer Choice Models and Assortment Optimization

Abstract: In this talk we will focus on the assortment optimization problem. In this problem there is a set of products, each of which gives some fixed revenue when it is purchased by a customer. There is also a set of customers whose purchasing behavior is influenced by which products they view. Our objective is to select some subset of products to display to customers in order to maximize expected revenue. The key component in this problem is customer purchasing behavior, which we describe with a customer choice model. We will introduce a variety of customer choice models, each of which presents unique challenges. Under one of these models, the multinomial logit model, we will solve variants of the assortment optimization problem.
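
As a concrete illustration, the sketch below evaluates expected revenue under the multinomial logit (MNL) model and scans the revenue-ordered assortments, which are known to contain an optimal assortment under MNL; the revenues and preference weights are made-up numbers.

```python
# A hedged sketch of assortment optimization under the multinomial
# logit (MNL) model. r[i] is product i's revenue, v[i] its preference
# weight; the no-purchase option has weight 1. Under MNL it suffices
# to scan the nested sets of the highest-revenue products.
import numpy as np

r = np.array([10.0, 8.0, 6.0, 4.0])   # revenues (illustrative numbers)
v = np.array([0.5, 1.0, 1.5, 2.0])    # MNL preference weights

def expected_revenue(S):
    # purchase probability of i in S is v[i] / (1 + sum of weights in S)
    denom = 1.0 + v[S].sum()
    return (r[S] * v[S]).sum() / denom

order = np.argsort(-r)                 # products by decreasing revenue
best = max((order[:k] for k in range(1, len(r) + 1)), key=expected_revenue)
print("optimal assortment:", sorted(best.tolist()),
      "revenue:", expected_revenue(best))
```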

João Gouveia; Positive semidefinite rank and representations of polytopes

Tuesday, January 19, 2016, 4:00 pm
SIG 226

João Gouveia, University of Coimbra.

Abstract: Let M be a p-by-q matrix with nonnegative entries. The positive semidefinite rank (psd rank) of M is the smallest integer k for which there exist positive semidefinite matrices A_i, B_j of size k×k such that M_{ij} = trace(A_i B_j). The psd rank was recently introduced, and has many appealing interpretations, capturing some geometrical aspects of the power and limitations of semidefinite programming. In this talk, we will briefly cover basic properties and open questions on this quantity, and proceed to use some of these results to provide a complete characterization of polytopes in R^4 that can be represented as economically as possible by means of a semidefinite program.
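
As a quick numerical illustration of the definition (with made-up factors, not an example from the talk), the sketch below builds psd matrices A_i, B_j of size k×k and reconstructs the entrywise-nonnegative matrix M_{ij} = trace(A_i B_j), which by construction has psd rank at most k.

```python
# A small numerical check of the definition M_ij = trace(A_i B_j):
# given psd factors of size k, reconstruct M. The factors below are a
# made-up example, not drawn from the talk.
import numpy as np

rng = np.random.default_rng(1)
k, p, q = 2, 3, 4

def random_psd(k):
    G = rng.standard_normal((k, k))
    return G @ G.T                     # G G^T is positive semidefinite

A = [random_psd(k) for _ in range(p)]
B = [random_psd(k) for _ in range(q)]

M = np.array([[np.trace(Ai @ Bj) for Bj in B] for Ai in A])
print(M)                               # entrywise nonnegative
print((M >= 0).all())                  # psd rank of M is at most k
```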

Victor Falgas-Ravry; Finding large full subgraphs

Tuesday, December 8, 2015, 4:00 pm
GUG 204

Victor Falgas-Ravry, University of Washington.
Finding large full subgraphs

Abstract: A graph is a pair G=(V,E), where V is a set of vertices and E is a set of pairs from V forming the edges of G. Suppose G is a graph on n vertices in which a proportion p of all possible edges are present, for some fixed p with 0 ≤ p ≤ 1. We call a subgraph H of G on m vertices "full" if every vertex in H is connected by an edge to at least p(m-1) other vertices in H. In other words, a subgraph H is full if its minimum degree is at least the expected average degree for an m-vertex subgraph. If we think of the graph G as representing a network, then its full subgraphs correspond to "cliquey" communities within that network.

In this talk I shall consider the question of how large a full subgraph I can be guaranteed to find. Improving on earlier work of Erdős, Łuczak and Spencer, I shall give a simple algorithm for finding full subgraphs of order at least n^{2/3}, and show that this order is best possible for infinitely many values of p.
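
For intuition, here is a naive "peeling" heuristic, not the algorithm of the talk: repeatedly delete a vertex whose degree violates the fullness threshold. Peeling always terminates with a full (possibly empty) subgraph but carries no size guarantee; the talk's algorithm chooses deletions more carefully to reach order n^{2/3}. The networkx usage below is an assumption of convenience.

```python
# A hedged sketch of naive "peeling" toward a full subgraph:
# repeatedly delete a vertex whose degree falls below p(m-1) in the
# current subgraph, where m is the current number of vertices.
import networkx as nx

def peel_to_full(G, p):
    H = G.copy()
    while H.number_of_nodes() > 0:
        m = H.number_of_nodes()
        low = [u for u, deg in H.degree() if deg < p * (m - 1)]
        if not low:
            return H                   # every vertex meets the threshold
        H.remove_node(low[0])          # delete one violating vertex
    return H

G = nx.erdos_renyi_graph(200, 0.3, seed=42)
H = peel_to_full(G, 0.3)
print("full subgraph on", H.number_of_nodes(), "vertices")
```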

Joint work with Klas Markström and Jacques Verstraëte.

Pedro Domingos; Recursive Decomposition for Nonconvex Optimization

CORE Series
Tuesday, November 24, 2015, 4:00 pm
Gowen Hall, Room 201
Pedro Domingos, University of Washington.
Recursive Decomposition for Nonconvex Optimization

Abstract: Most real-world continuous optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be brought to bear. Based on this, we propose a problem-decomposition approach to nonconvex optimization. Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent subfunctions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent. The variables to set are chosen by graph partitioning, ensuring decomposition whenever possible. We show analytically that RDIS can solve a broad class of nonconvex optimization problems exponentially faster than gradient descent with random restarts. Experimentally, RDIS outperforms standard techniques on problems like structure from motion and protein folding.
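
The sketch below is a heavily simplified rendition of the set-and-decompose idea, not the authors' RDIS implementation: on a toy nonconvex chain objective, fixing one "separator" variable decouples the two halves, which are optimized recursively, with a standard local optimizer as the base case. The naive separator choice and the small grid of candidate values stand in for the paper's graph-partitioning and value-selection machinery.

```python
# A heavily simplified sketch of recursive set-and-decompose on a toy
# nonconvex chain objective (many local modes). Not the authors' code.
import numpy as np
from scipy.optimize import minimize

def chain_obj(x):
    # sum_i (x_i^2 - 1)^2 + sum_i (x_i - x_{i+1})^2
    return np.sum((x**2 - 1.0)**2) + np.sum(np.diff(x)**2)

def solve_segment(n, left=None, right=None):
    # base case: local optimization of an n-variable subchain, with
    # optional couplings to fixed boundary values left/right
    def f(x):
        val = chain_obj(x)
        if left is not None:
            val += (left - x[0])**2
        if right is not None:
            val += (x[-1] - right)**2
        return val
    best = min((minimize(f, x0) for x0 in (np.full(n, -1.0), np.full(n, 1.0))),
               key=lambda res: res.fun)
    return best.fun, best.x

def rdis_like(n, left=None, right=None):
    if n <= 3:                              # small enough: local optimizer
        return solve_segment(n, left, right)
    m = n // 2                              # naive "separator" variable
    best = (np.inf, None)
    for c in (-1.0, 0.0, 1.0):              # candidate values for x_m
        fl, xl = rdis_like(m, left, c)      # two now-independent subchains
        fr, xr = rdis_like(n - m - 1, c, right)
        val = fl + fr + (c**2 - 1.0)**2     # separator's own unary term
        if val < best[0]:
            best = (val, np.concatenate([xl, [c], xr]))
    return best

val, x = rdis_like(9)
print("objective:", val)                    # global optimum here is 0
print("x:", np.round(x, 3))
```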

Bio: Pedro Domingos is a professor of computer science at the University of Washington and the author of “The Master Algorithm”. He is a winner of the SIGKDD Innovation Award, the highest honor in data science. He is a Fellow of the Association for the Advancement of Artificial Intelligence, and has received a Fulbright Scholarship, a Sloan Fellowship, the National Science Foundation’s CAREER Award, and numerous best paper awards. He received his Ph.D. from the University of California at Irvine and is the author or co-author of over 200 technical publications. He has held visiting positions at Stanford, Carnegie Mellon, and MIT. He co-founded the International Machine Learning Society in 2001. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science, including scaling learning algorithms to big data, maximizing word of mouth in social networks, unifying logic and probability, and deep learning.