Category Archives: Fall 2016

David Shmoys; Models and Algorithms for the Operation and Design of Bike-Sharing Systems

CORE Series (Joint with CSE Theory Seminar)
**Thursday**, Dec 1, 2016
11:00AM-12:00PM in CSE 403
David Shmoys, Cornell University

TITLE: Models and Algorithms for the Operation and Design of Bike-Sharing Systems

ABSTRACT: New York launched the largest bike-sharing system in North America in May 2013. We have worked with Citibike, using analytics and optimization to change how they manage the system. Huge rush-hour usage imbalances the system, and we address two questions: where should bikes be at the start of a day, and how can we mitigate the imbalances that develop? We will survey the analytics we have employed for the former question, where we developed an approach based on continuous-time Markov chains combined with integer programming models to compute daily stocking levels for the bikes, as well as methods employed for optimizing the capacity of the stations. For the question of mitigating the imbalances that result, we will describe both heuristic methods and approximation algorithms that guide mid-rush-hour and overnight rebalancing, as well as the positioning of corrals, which have been one of the most effective means of creating adaptive capacity in the system.

This is joint work with Daniel Freund, Shane Henderson, Nanjing Jian, Ashkan Nourozi-Fard, Eoin O’Mahoney, and Alice Paul.
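For readers who want a feel for the modeling, here is a minimal sketch (not the models from the talk) of how a continuous-time Markov chain can score candidate morning stocking levels at a single station. The rates, capacity, and rush-hour horizon below are illustrative assumptions; the talk couples far richer station models with system-wide integer programs.

```python
# A minimal sketch (not the talk's models): score candidate morning stocking
# levels for a single station modeled as a birth-death continuous-time Markov
# chain.  Illustrative assumptions: rentals and returns are independent
# Poisson streams with rates rent_rate and return_rate, the station has
# `capacity` docks, and "dissatisfied customers" over a rush-hour window are
# approximated by integrating the rate of arrivals that find the station
# empty (would-be renters) or full (would-be returners).
import numpy as np
from scipy.linalg import expm

def expected_dissatisfied(start_bikes, capacity, rent_rate, return_rate,
                          horizon=3.0, grid=60):
    """Expected unserved rentals + returns over [0, horizon] hours."""
    n = capacity + 1                       # states: 0..capacity bikes docked
    Q = np.zeros((n, n))                   # CTMC generator (rows sum to zero)
    for s in range(n):
        if s > 0:
            Q[s, s - 1] = rent_rate        # a rental removes a bike
        if s < capacity:
            Q[s, s + 1] = return_rate      # a return adds a bike
        Q[s, s] = -Q[s].sum()
    p0 = np.zeros(n)
    p0[start_bikes] = 1.0
    times = np.linspace(0.0, horizon, grid)
    loss_rate = []
    for t in times:
        p_t = p0 @ expm(Q * t)             # transient distribution at time t
        loss_rate.append(rent_rate * p_t[0] + return_rate * p_t[-1])
    return np.trapz(loss_rate, times)

# Scan all stocking levels and pick the one minimizing expected lost demand;
# the talk's approach couples station-level analyses like this with integer
# programming across the whole system.
capacity, rent_rate, return_rate = 20, 8.0, 3.0
costs = [expected_dissatisfied(b, capacity, rent_rate, return_rate)
         for b in range(capacity + 1)]
print("best morning stocking level:", int(np.argmin(costs)))
```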



BIO: David Shmoys is the Laibe/Acheson Professor in the School of Operations Research and Information Engineering and the Department of Computer Science at Cornell University, and is currently the Director of the School of Operations Research and Information Engineering. Shmoys’s research has focused on the design and analysis of efficient algorithms for discrete optimization problems, with applications including scheduling, inventory theory, computational biology, and, most recently, stochastic optimization models and algorithms in computational sustainability. His graduate-level text, The Design of Approximation Algorithms, co-authored with David Williamson, was awarded the 2013 INFORMS Lanchester Prize. He is an INFORMS Fellow, a Fellow of the ACM, a SIAM Fellow, and was an NSF Presidential Young Investigator; he has served on numerous editorial boards, and is currently Editor-in-Chief of Research in the Mathematical Sciences (for theoretical computer science) and an Associate Editor of Mathematics of Operations Research.


Jonathan Jonker; Singular Kalman Smoothing

Nov 22, 2016, 4pm
SAV 131
Jonathan Jonker, Department of Mathematics, University of Washington

Abstract: The Kalman filter is a process of estimating unknown variables using a sequence of known measurements. It is widely used in fields such as navigation, tracking, and economics. While this is not classically posed as an optimization problem, it has been shown that, by using a maximum a posteriori estimate, it can be reformulated as a least squares minimization problem. This reformulation assumes that certain covariance matrices are nonsingular; we show that this assumption may be dropped. Once the optimization problem is derived, we discuss solution methods and provide numerical examples.

Joint work with Sasha Aravkin.
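As background, here is a minimal sketch of the classical least-squares reformulation the abstract refers to, in the nonsingular-covariance case (the talk's contribution is removing that assumption). The model matrices and the random-walk example are illustrative assumptions, not material from the talk.

```python
# A minimal sketch of the classical reformulation the abstract refers to: for
# a linear-Gaussian state-space model with *nonsingular* covariances Q and R,
# the maximum a posteriori state sequence solves one stacked (weighted) least
# squares problem.  The talk's contribution is removing the nonsingularity
# assumption; this sketch keeps it.  All model matrices are illustrative.
import numpy as np

def kalman_smoother_ls(y, A, H, Q, R, mu0, P0):
    """MAP estimate of x_1..x_N via a single weighted least-squares solve."""
    N, m = y.shape                 # N measurements of dimension m
    n = A.shape[0]                 # state dimension
    # Whitening matrices W with W.T @ W = inverse covariance.
    Wq = np.linalg.cholesky(np.linalg.inv(Q)).T
    Wr = np.linalg.cholesky(np.linalg.inv(R)).T
    W0 = np.linalg.cholesky(np.linalg.inv(P0)).T
    rows_G, rows_b = [], []
    def block_row(blocks, rhs):
        row = np.zeros((len(rhs), N * n))
        for k, B in blocks:
            row[:, k * n:(k + 1) * n] = B
        rows_G.append(row); rows_b.append(rhs)
    block_row([(0, W0)], W0 @ mu0)                   # prior: ||W0 (x_1 - mu0)||^2
    for k in range(N):
        block_row([(k, Wr @ H)], Wr @ y[k])          # ||Wr (y_k - H x_k)||^2
        if k + 1 < N:
            block_row([(k, -Wq @ A), (k + 1, Wq)],   # ||Wq (x_{k+1} - A x_k)||^2
                      np.zeros(n))
    G, b = np.vstack(rows_G), np.concatenate(rows_b)
    x_hat, *_ = np.linalg.lstsq(G, b, rcond=None)
    return x_hat.reshape(N, n)

# Tiny example: a 1-D random walk observed with noise.
rng = np.random.default_rng(0)
A = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[1.0]])
x_true = np.cumsum(rng.normal(0.0, 0.3, 50))[:, None]
y = x_true + rng.normal(0.0, 1.0, (50, 1))
x_smooth = kalman_smoother_ls(y, A, H, Q, R, mu0=np.zeros(1), P0=np.eye(1))
```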

Rainer Sinn; Positive semidefinite matrix completion and sums of squares

Nov 15, 2016, 4pm
SAV 131
Rainer Sinn, School of Mathematics, Georgia Institute of Technology

Abstract: I will discuss the positive semidefinite matrix completion problem arising in combinatorial statistics and explain how we can use results in algebraic geometry (or combinatorial commutative algebra) to understand it better. The object linking the two different areas is the cone of sums of squares and its properties as a convex cone.
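To make the completion problem concrete, here is a small hedged sketch that tests feasibility of a PSD completion as a semidefinite program. The use of cvxpy and the particular entries are assumptions for illustration only; the 4-cycle data are a classical example of a partial matrix whose specified blocks are PSD but which has no PSD completion.

```python
# A minimal sketch of the positive semidefinite (PSD) matrix completion
# problem from the abstract: given entries prescribed on the diagonal and on
# the edges of a graph, decide whether a PSD completion exists by solving a
# feasibility SDP.  cvxpy is used only for illustration; any SDP solver works.
import cvxpy as cp

def has_psd_completion(n, specified):
    """specified: dict {(i, j): value} of known entries (i <= j)."""
    X = cp.Variable((n, n), PSD=True)        # PSD=True also forces symmetry
    constraints = [X[i, j] == v for (i, j), v in specified.items()]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Partial matrix on the 4-cycle 0-1-2-3-0 with unit diagonal: every fully
# specified 2x2 block is PSD, yet the cycle obstructs any PSD completion.
entries = {(0, 0): 1, (1, 1): 1, (2, 2): 1, (3, 3): 1,
           (0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): -1}
print(has_psd_completion(4, entries))   # False

entries[(0, 3)] = 1                     # now the correlations are consistent
print(has_psd_completion(4, entries))   # True
```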

Jonathan Jedwab; How many mutually unbiased bases can exist in complex space of dimension d?

**Monday**, Nov 21, 2016, 3:30pm
LOW 117
Jonathan Jedwab, Department of Mathematics, Simon Fraser University

Abstract: A set of m mutually unbiased bases in C^d comprises md unit vectors in C^d, partitioned into m orthonormal bases for C^d such that the pairwise angle between all vectors from distinct bases is arccos(1/sqrt(d)). Schwinger noted in 1960 that no information can be obtained when a quantum system is prepared in a state belonging to one of the bases of such a set but is measured with respect to any other one of the bases. This property can be exploited in secure quantum key exchange, quantum state determination, quantum state reconstruction, and detection of quantum entanglement.

The central problem is to determine the largest number μ(d) of mutually unbiased bases that can exist in C^d. It has been known for 40 years that μ(d) <= d + 1, but a construction achieving the upper bound d + 1 is known only when d is a prime power. Despite considerable effort and a huge literature, there has been little progress in determining μ(d) when d is not a prime power, with the notable exception of Weiner’s 2013 result that μ(d) never equals d. Even the smallest non-prime-power case d = 6 remains baffling: all that is known is that μ(6) ∈ {3, 4, 5, 7}.

I shall give an overview of the motivation and current state of knowledge for this problem, and describe some new insights involving combinatorial designs. No prior knowledge will be assumed.

This is joint work with Lily Yen of Capilano University.
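As a concrete illustration of the prime-power construction mentioned above, here is a short numerical check of the classical d + 1 mutually unbiased bases when d is an odd prime. The quadratic-phase construction shown is standard and is not specific to the talk.

```python
# Illustrative check of the classical construction of d+1 mutually unbiased
# bases when d is an odd prime p: the standard basis together with the p
# bases whose vectors have entries omega^(a*j^2 + b*j)/sqrt(p), where
# omega = exp(2*pi*i/p).  The script verifies that every pair of vectors from
# distinct bases satisfies |<u, v>| = 1/sqrt(d), i.e. angle arccos(1/sqrt(d)).
import numpy as np
from itertools import combinations

def mub_odd_prime(p):
    omega = np.exp(2j * np.pi / p)
    j = np.arange(p)
    bases = [np.eye(p, dtype=complex)]                 # standard basis
    for a in range(p):
        B = np.array([omega ** (a * j * j + b * j) for b in range(p)]) / np.sqrt(p)
        bases.append(B)                                # rows are basis vectors
    return bases

def check_unbiased(bases, d):
    for B1, B2 in combinations(bases, 2):
        overlaps = np.abs(B1.conj() @ B2.T)            # |<u, v>| for all pairs
        assert np.allclose(overlaps, 1 / np.sqrt(d)), "not mutually unbiased"

d = 5
bases = mub_odd_prime(d)
check_unbiased(bases, d)
print(f"found {len(bases)} mutually unbiased bases in C^{d}")   # d + 1 = 6
```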

Yin-Tat Lee; Geodesic Walks

Nov 29, 2016, 4pm
SAV 131
Yin-Tat Lee, Microsoft Research

Abstract: We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from polytopes, a problem that is even more general than linear programming. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold. The resulting sampling algorithm for polytopes mixes in O*(mn^{3/4}) steps for a polytope in R^n specified by m inequalities. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O*(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes.
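The geodesic walk itself requires simulating an SDE/ODE on a Riemannian manifold and is beyond a short snippet; for reference, here is a minimal sketch of the baseline it improves on, the Dikin walk of Kannan and Narayanan, for sampling a polytope {x : Ax <= b}. The step-size constant and iteration count are heuristic illustrative choices, not values from the analysis.

```python
# Minimal sketch of the Dikin walk (the O*(mn) baseline cited in the
# abstract) for sampling approximately uniformly from {x : Ax <= b}.
import numpy as np

def log_barrier_hessian(A, b, x):
    s = b - A @ x                                # slacks, must stay positive
    return A.T @ np.diag(1.0 / s**2) @ A

def dikin_walk(A, b, x0, steps=1000, r=0.4, rng=None):
    rng = np.random.default_rng(rng)
    n = len(x0)
    x = x0.copy()
    Hx = log_barrier_hessian(A, b, x)
    for _ in range(steps):
        # Propose y uniform in the Dikin ellipsoid {y : (y-x)^T Hx (y-x) <= r^2/n}.
        u = rng.normal(size=n)
        u *= rng.uniform() ** (1 / n) / np.linalg.norm(u)   # uniform in unit ball
        y = x + (r / np.sqrt(n)) * np.linalg.solve(np.linalg.cholesky(Hx).T, u)
        if np.any(A @ y >= b):
            continue                             # outside the polytope: reject
        Hy = log_barrier_hessian(A, b, y)
        # Reject if x is not in y's ellipsoid (keeps the chain reversible) ...
        if (x - y) @ Hy @ (x - y) > r**2 / n:
            continue
        # ... then apply the Metropolis filter with ratio vol(E_x)/vol(E_y).
        ldx = np.linalg.slogdet(Hx)[1]
        ldy = np.linalg.slogdet(Hy)[1]
        if np.log(rng.uniform()) <= 0.5 * (ldy - ldx):
            x, Hx = y, Hy
    return x

# Example: one approximately uniform point from the cube [-1, 1]^5.
n = 5
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.ones(2 * n)
print(dikin_walk(A, b, np.zeros(n), rng=0))
```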

Harishchandra Ramadas; A Deterministic Algorithm for Discrepancy Minimization

Oct 11, 2016, 4pm
SAV 131
Harishchandra Ramadas, Department of Mathematics, University of Washington

Abstract: Given a collection of n items and n subsets of this collection, we want to color each item either red or blue so that in every subset, the number of red items is close to the number of blue items. In 1985, Spencer showed that there exists a coloring so that the “discrepancy” of each subset (the difference between the number of red and blue items) is at most 6 sqrt(n). We present a deterministic algorithm based on the multiplicative weights update method to find such a coloring in polynomial time.

Based on joint work with Avi Levy and Thomas Rothvoss. No background will be assumed.
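The deterministic multiplicative-weights algorithm from the talk is more involved than a snippet allows; the sketch below only sets up the problem. It computes the discrepancy of a coloring and shows the random-coloring baseline, whose O(sqrt(n log n)) guarantee is what Spencer's bound, and the talk's constructive algorithm, improve on. The instance is randomly generated for illustration.

```python
# Problem setup only (not the talk's algorithm).  Rows of the incidence
# matrix A are subsets, columns are items, a coloring is x in {-1, +1}^n, and
# its discrepancy is max_i |(A x)_i|.  A uniformly random coloring only
# guarantees O(sqrt(n log n)) with high probability; Spencer's theorem gives
# 6*sqrt(n), and the talk's algorithm achieves an O(sqrt(n)) bound
# deterministically in polynomial time.
import numpy as np

def discrepancy(A, x):
    """Largest imbalance |#red - #blue| over all subsets (rows of A)."""
    return int(np.max(np.abs(A @ x)))

rng = np.random.default_rng(0)
n = 512
A = (rng.random((n, n)) < 0.5).astype(int)   # n random subsets of n items
x = rng.choice([-1, 1], size=n)              # random red/blue coloring
print("random coloring discrepancy:", discrepancy(A, x))
print("Spencer guarantee, 6*sqrt(n):", int(6 * np.sqrt(n)))
```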

Reza Eghbali; Worst Case Competitive Analysis of Online Algorithms for Conic Optimization

Nov 1, 2016, 4pm
SAV 131
Reza Eghbali, Department of Electrical Engineering, University of Washington

Abstract: Online optimization covers problems such as online resource allocation, online bipartite matching, adwords (a central problem in e-commerce and advertising), and adwords with separable concave returns. We analyze the worst-case competitive ratio of two primal-dual algorithms for a class of online convex (conic) optimization problems defined on the positive orthant that contains the previous examples as special cases. We derive a sufficient condition on monotone objective functions that guarantees a constant worst-case competitive ratio (greater than or equal to $\frac{1}{2}$). Using the same framework, we also derive the competitive ratio for problems with objective functions that are not monotone. We provide new examples of online problems on the positive orthant and the positive semidefinite cone that satisfy the sufficient condition. We show how smoothing can improve the competitive ratio of these algorithms; in particular, for separable functions, we show that the optimal smoothing can be derived by solving a convex optimization problem. This result allows us to directly optimize the competitive ratio bound over a class of smoothing functions, and hence design effective smoothing customized for a given cost function.
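The abstract's conic framework is general; as one concrete special case it contains, here is a minimal sketch of a greedy rule for online budgeted allocation (the adwords setting). Greedy's worst-case competitive ratio for this problem is known to be 1/2; the budgets, bids, and arrival order below are illustrative assumptions.

```python
# Sketch of greedy online budgeted allocation (adwords), one special case of
# the framework in the abstract.  Queries arrive one at a time; each is
# assigned to the advertiser from whom the most value can be extracted
# (bid capped by remaining budget).  Worst-case competitive ratio: 1/2.
from typing import Dict

def greedy_adwords(budgets: Dict[str, float], query_bids):
    """query_bids: iterable of {advertiser: bid} dicts, one per arriving query."""
    remaining = dict(budgets)
    revenue = 0.0
    for bids in query_bids:
        # Extractable value per advertiser for this query; ties broken arbitrarily.
        best = max(bids, key=lambda a: min(bids[a], remaining.get(a, 0.0)))
        charge = min(bids[best], remaining.get(best, 0.0))
        if charge > 0:
            remaining[best] -= charge
            revenue += charge
    return revenue

# Illustrative adversarial arrival order: greedy earns 6.0, while the offline
# optimum (phase one to A, phase two to B) earns 8.0.
budgets = {"A": 4.0, "B": 4.0}
queries = [{"A": 1.0, "B": 2.0}] * 4 + [{"B": 2.0}] * 4
print(greedy_adwords(budgets, queries))
```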