Category Archives: Winter 2013

Neeraj Kumar; Leafsnap – A Computer Vision System for Automatic Plant Species Identification

March 12, 2013, 4:00pm
Padelford C-401
Neeraj Kumar, Department of Computer Science and Engineering, University of Washington
Leafsnap – A Computer Vision System for Automatic Plant Species Identification

Abstract: Botanists in the field are racing to capture the complexity of Earth’s flora before climate change and development erase their living record. To greatly speed up the process of plant species identification, collection, and monitoring, we have built and publicly released the first hand-held botanical identification system. The system — called Leafsnap — identifies tree species from photographs of their leaves. With nearly a million downloads, we believe it is the most widely deployed non-commercial computer vision system in the world. In this talk, I will describe the development of this system from its early motivations to its current success. In particular, I will cover our efforts to digitize the Smithsonian’s collection of type specimens of 100,000 plant species, and the subsequent creation of an app for public use. The key technical pieces of the app are segmenting the leaf from the background, computing curvature features robustly along the boundary of the segmented leaf, and performing identification using these extracted features. I’ll also describe the insights gained from creating and deploying such a large-scale system to non-technical users. Finally, I’ll try to place this work in the larger context of the vision community, in particular within the burgeoning “fine-grained visual recognition” area.
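To make the curvature step concrete, here is a minimal numpy sketch of one standard multiscale integral measure of curvature: at each boundary pixel, record the fraction of a disk of radius r that lies inside the leaf mask, then histogram those fractions over several radii to form a signature for nearest-neighbor matching. The toy mask, radii, and bin count below are illustrative assumptions, not Leafsnap's released code or parameters.

import numpy as np

def boundary_pixels(mask):
    """Return (row, col) coordinates of pixels on the boundary of a binary mask."""
    interior = mask.copy()
    # A pixel is interior only if all four of its neighbors are also foreground.
    interior[1:-1, 1:-1] &= (mask[:-2, 1:-1] & mask[2:, 1:-1] &
                             mask[1:-1, :-2] & mask[1:-1, 2:])
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = False
    return np.argwhere(mask & ~interior)

def curvature_histograms(mask, radii=(4, 8, 16), bins=21):
    """For each radius, histogram the fraction of each boundary disk inside the mask."""
    pts = boundary_pixels(mask)
    H, W = mask.shape
    feats = []
    for r in radii:
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        disk = (ys**2 + xs**2) <= r**2
        fracs = []
        for y, x in pts:
            if r <= y < H - r and r <= x < W - r:  # skip disks leaving the image
                patch = mask[y - r:y + r + 1, x - r:x + r + 1]
                fracs.append(patch[disk].mean())   # fraction of disk inside leaf
        hist, _ = np.histogram(fracs, bins=bins, range=(0.0, 1.0), density=True)
        feats.append(hist)
    return np.concatenate(feats)  # compare signatures with a histogram distance

# Toy example: an elliptical "leaf" mask.
yy, xx = np.mgrid[0:128, 0:128]
leaf = ((yy - 64)**2 / 40.0**2 + (xx - 64)**2 / 24.0**2) <= 1.0
print(curvature_histograms(leaf).shape)  # (3 * 21,) feature vector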

Ben Recht; How to make predictions when you’re short on information

February 26, 2013, 3:30pm
EEB 105
Ben Recht, Computer Sciences Department, University of Wisconsin
How to make predictions when you’re short on information

Abstract: With the advent of massive social networks, exascale computing, and high-throughput biology, researchers in every scientific department now face profound challenges in analyzing, manipulating and identifying behavior from a deluge of noisy, incomplete data. In this talk, I will present a unifying framework to make such data analysis tasks less sensitive to corrupted and missing data by exploiting domain-specific knowledge and prior information about structure. Specifically, I will show that when a signal or system of interest can be represented by a combination of a few simple building blocks, called atoms, it can be identified with dramatically fewer sensors and accelerated acquisition times. For example, a few principal factors can determine preferences across a user base, a small number of genes may constitute the signature of a disease, and a sum of a few permutations can summarize the ranking of sports teams. In each application, the challenge lies not only in defining the appropriate set of atoms, but also in estimating the most parsimonious combination of atoms that agrees with a small set of measurements. This talk advances a framework for transforming notions of simplicity and latent low-dimensionality into convex optimization problems. My approach builds on the recent success of generalizing compressed sensing to matrix completion, creating a unified framework that greatly extends the catalog of objects and structures recoverable from partial information. This framework provides a standardized methodology to sharply bound the number of observations required to robustly estimate a variety of structured models. It also enables focused algorithmic development that can be deployed in many different applications, a variety of which I will detail in this talk. I will close by demonstrating how this framework provides the abstractions necessary to scale these optimization algorithms to the massive data sets we now commonly acquire.
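As a schematic of the convex programs this framework produces (notation ours: \mathcal{A} is the chosen set of atoms, \Phi the measurement operator), the atomic norm and the associated recovery problem are

\[
\|x\|_{\mathcal{A}} \;=\; \inf\Bigl\{\sum_{a \in \mathcal{A}} c_a \;:\; x = \sum_{a \in \mathcal{A}} c_a\, a,\;\; c_a \ge 0\Bigr\},
\qquad
\hat{x} \;=\; \operatorname*{argmin}_{x}\; \|x\|_{\mathcal{A}} \;\;\text{s.t.}\;\; \Phi x = y .
\]

Taking \mathcal{A} to be the signed standard basis vectors recovers \ell_1 minimization (compressed sensing); taking it to be the unit-norm rank-one matrices recovers nuclear-norm minimization (matrix completion), the generalization the abstract builds on.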

Venkat Chandrasekaran; Computational and Statistical Tradeoffs via Convex Relaxation

February 19, 2013, 4:00pm
Padelford C-401
Venkat Chandrasekaran, Department of Computing and Mathematical Sciences, Caltech
Computational and Statistical Tradeoffs via Convex Relaxation

Abstract: In modern data analysis, one is frequently faced with statistical inference problems involving massive datasets. Processing such large datasets is usually viewed as a substantial computational challenge. However, if data are a statistician’s main resource, then access to more data should be viewed as an asset rather than as a burden. We describe a computational framework based on convex relaxation to reduce the computational complexity of an inference procedure when one has access to increasingly larger datasets. Convex relaxation techniques have been widely used in theoretical computer science as they give tractable approximation algorithms to many computationally intractable tasks. We demonstrate the efficacy of this methodology in statistical estimation by providing concrete time-data tradeoffs in a class of denoising problems. Thus, convex relaxation offers a principled approach to exploit the statistical gains from larger datasets to reduce the runtime of inference algorithms. (Joint work with Michael Jordan.)
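One way to see the tradeoff in the denoising setting the abstract mentions (a sketch; notation is ours): given n noisy samples y_i = x^* + \sigma z_i of a signal x^* lying in a set S, one projects the sample mean onto a convex relaxation C \supseteq S:

\[
\hat{x}_n \;=\; \operatorname*{argmin}_{x \in C}\; \bigl\|\bar{y}_n - x\bigr\|_2^2,
\qquad
\bar{y}_n = \frac{1}{n}\sum_{i=1}^{n} y_i .
\]

A looser relaxation C is cheaper to project onto but gives a larger estimation error at fixed n; with more samples, the looser, faster relaxation attains the same risk. This is the exchange of data for runtime described above.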

James Pfeiffer; The K_i Cover Problem with Semidefinite Programming

February 12, 2013, 4:00pm
Padelford C-401
James Pfeiffer, Department of Mathematics, University of Washington
The K_i Cover Problem with Semidefinite Programming

Abstract: The K_i cover problem generalizes the stable set problem from avoiding edges to avoiding larger cliques. Motivated by the Lovász theta relaxation of the stable set polytope, we apply the related theta body relaxation of an algebraic variety to the K_i cover polytope. We show that for certain families of facets, the approximation is exact. For the triangle-free problem (i = 3), we prove a slow convergence result and a bound on the integrality gap. This is joint work with João Gouveia.
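For orientation, in the edge case the first theta body coincides with the Lovász theta body of a graph G = (V, E), which relaxes the stable set polytope as the projection of a semidefinite slice:

\[
\mathrm{TH}(G) \;=\; \bigl\{ x \in \mathbb{R}^{V} \;:\; \exists\, M \succeq 0,\; M_{00} = 1,\; M_{0i} = M_{ii} = x_i \;(i \in V),\; M_{ij} = 0 \;(ij \in E) \bigr\},
\]

so that STAB(G) \subseteq TH(G). The talk's relaxation applies the analogous theta body construction to the vanishing ideal of the 0/1 points of the K_i cover polytope.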

Thomas Rothvoss; Approximating Bin Packing within O(log OPT * log log OPT) bins

February 1, 2013, 2:30pm (nonstandard date and time)
LOW 102
Thomas Rothvoss (faculty candidate), Department of Mathematics, MIT

Abstract: For bin packing, the input consists of n items with sizes s_1,…,s_n in [0,1] which have to be assigned to a minimum number of bins of size 1. The seminal Karmarkar-Karp algorithm from 1982 produces a solution with at most OPT + O(log^2 OPT) bins in polynomial time, where OPT denotes the value of the optimum solution. I will describe the first improvement in three decades and show that one can find a solution of cost OPT + O(log OPT * log log OPT) in polynomial time. This is achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation using the Entropy Method from discrepancy theory. I will also survey other results of mine from discrete mathematics and combinatorial optimization concerning: an approach to the Hirsch conjecture on the diameter of polytopes; the Chvátal rank of polytopes; the extension complexity of 0/1 polytopes; and the approximability of the Steiner tree problem.
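For reference, the Gilmore-Gomory (configuration) LP mentioned above: with item types 1,…,m of sizes s_i and demands b_i, let \mathcal{C} be the set of configurations, i.e., vectors a \in \mathbb{Z}_{\ge 0}^m with s^{\top} a \le 1. The relaxation is

\[
\min \Bigl\{ \sum_{a \in \mathcal{C}} x_a \;:\; \sum_{a \in \mathcal{C}} a_i\, x_a \,\ge\, b_i \;\;(i = 1,\dots,m),\;\; x \ge 0 \Bigr\}.
\]

The talk's result rounds a fractional solution x of this LP to an integral packing using only O(log OPT * log log OPT) extra bins, versus the O(log^2 OPT) loss of Karmarkar-Karp rounding.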

http://www.math.washington.edu/Seminars/coll_abstracts.php#21

Dmitriy Drusvyatskiy; Slope and Geometry in Variational Mathematics

January 29, 2013, 2:30pm (nonstandard date and time)
MEB 246
Dmitriy Drusvyatskiy (faculty candidate), School of Operations Research and Information Engineering, Cornell

Abstract: Various notions of the “slope” of a real-valued function pervade optimization, and variational mathematics more broadly. In the semi-algebraic setting – an appealing model for concrete variational problems – the slope is particularly well-behaved. This talk sketches a variety of surprising applications, illustrating the unifying power of slope. Highlights include error bounds for level sets, the existence and regularity of steepest descent curves in complete metric spaces (following Ambrosio et al.), transversality and the convergence of von Neumann’s alternating projection algorithm, and the geometry underlying nonlinear programming active-set algorithms. This talk will be self-contained, requiring no familiarity with variational analysis, optimization theory, or semi-algebraic geometry. Joint work with A. Daniilidis (Barcelona), A.D. Ioffe (Technion), M. Larsson (Lausanne), A.S. Lewis (Cornell).
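The notion at the talk's center can be stated in one line: the (strong) slope of f at a point \bar{x} with f(\bar{x}) finite is

\[
|\nabla f|(\bar{x}) \;=\; \limsup_{x \to \bar{x},\; x \neq \bar{x}}\; \frac{\bigl(f(\bar{x}) - f(x)\bigr)^{+}}{d(x, \bar{x})},
\]

the fastest instantaneous rate of decrease available at \bar{x}. For smooth f on a Euclidean space this reduces to \|\nabla f(\bar{x})\|, but the definition needs only a metric, which is why it supports the metric-space descent-curve and error-bound results listed above.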

http://www.math.washington.edu/Seminars/coll_abstracts.php#129

Karthik Mohan; Structured Learning of Gaussian Graphical Models

January 22, 2013, 4:00pm
Karthik Mohan, Department of Electrical Engineering, University of Washington

Abstract: We consider the problem of estimating multiple Gaussian graphical models given data in the high-dimensional setting, together with structured prior knowledge on how the models interact with each other. This problem has applications in computational biology, social networks, and elsewhere. To encourage the structured learning of graphical models, we construct a regularizer based on group sparsity, called the structured overlap norm. The regularizer is then used to construct a family of optimization problems which, when solved, promote graphical models with the desired structure. To solve the resulting set of optimization problems, we propose to use the ADMM algorithm (alternating direction method of multipliers); standard first-order methods such as the subgradient and proximal-gradient methods are inefficient due to the structure of our regularizer. We also develop a set of necessary and sufficient conditions for the optimal solution to have a block-diagonal structure. This enables breaking the optimization problem down into multiple sub-problems, leading to significant gains in computation without loss of accuracy. Finally, we present preliminary numerical results showing that the solution to the optimization problem does indeed promote graphical models with the desired structure.
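Schematically (with P standing in for the structured overlap norm and λ > 0 a tuning parameter; this generic form is our sketch of the family of problems described), the optimization has the shape

\[
\min_{\Theta_1,\dots,\Theta_K \,\succ\, 0}\; \sum_{k=1}^{K} \Bigl( \mathrm{tr}(S_k \Theta_k) - \log\det \Theta_k \Bigr) \;+\; \lambda\, P(\Theta_1,\dots,\Theta_K),
\]

where S_k is the sample covariance of the k-th condition and Θ_k the corresponding precision (inverse covariance) matrix. ADMM is natural here because it splits the smooth log-determinant terms from the nonsmooth coupled penalty P, sidestepping the difficulty that the abstract attributes to plain subgradient and proximal-gradient methods.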