IEOR-DRO Seminar

The IEOR-DRO Seminar is a joint offering from the Department of Industrial Engineering & Operations Research (IEOR) and the Decision, Risk, and Operations (DRO) Division of Columbia Business School. It features prominent research on decision-making through optimization, modeling and managing uncertainty, and all aspects of the operations function in firms.

Check out our seminar schedule below for details about upcoming seminars!

Spring 2022 Seminars

Please find the schedule for this semester's seminars below. Talk details and speaker biographies are added once announced. Once a seminar has passed, its recording will be linked below as well.

March 22nd | Chi Jin, Princeton University

Title: Theoretical Foundations of Multiagent Reinforcement Learning

Abstract: Reinforcement learning (RL) has made substantial empirical progress in solving hard AI challenges in the past few years. A large fraction of this progress (Go, Dota 2, StarCraft 2, economic simulation, social behavior learning, and so on) comes from multi-agent RL, that is, sequential decision making involving more than one agent. While the theoretical study of single-agent RL has a long history and rapidly growing recent interest, multi-agent RL (MARL) theory is arguably a newer and less developed field. In this talk, I will present our recent theoretical developments in MARL. Starting with basic formulations, we present several provably efficient algorithms for finding equilibria in MARL problems, addressing challenges unique to MARL such as the curse of multi-agents and the design of decentralized algorithms.

Bio: Chi Jin is an assistant professor in the Electrical and Computer Engineering department of Princeton University. He obtained his PhD degree in Computer Science at the University of California, Berkeley, advised by Michael I. Jordan. His research mainly focuses on theoretical machine learning, with special emphasis on nonconvex optimization and reinforcement learning. His representative work includes proving that noisy gradient descent escapes saddle points efficiently, and proving the efficiency of Q-learning and least-squares value iteration when combined with optimism in reinforcement learning.


March 24th | Vijay Vazirani, University of California, Irvine

More details to follow. 

March 29th | Hummy Song, Wharton

More details to follow. 

April 5th | Kamesh Munagala, Duke University

Title: Group Fairness in Network Design and Combinatorial Optimization

Abstract: Consider the following classical network design model. There are n clients in a multi-graph with a single sink node. Each edge has a cost to buy, and a length if bought; typically, costlier edges have smaller lengths. There is a budget B on the total cost of edges bought. Given a set of bought edges, the distance of a client to the sink is the shortest path according to the edge lengths. Such a model captures buy-at-bulk network design and facility location as special cases. Rather than pose this as a standard optimization problem, we ask a different question: if a provider is allocating budget B to build this network, how should it do so in a manner that is fair to the clients? We consider a classical model of group fairness termed the core in cooperative game theory: if each client contributes its share B/n of the budget as tax money, no subset of clients should be able to pool their tax money to deviate and build a different network that simultaneously improves all their distances to the sink. The question is: does such a solution always exist, or approximately exist? We consider an abstract "committee selection" model from the social choice literature that captures not only the above problem, but also other combinatorial optimization problems where we need to provision public resources subject to combinatorial constraints in order to provide utility to clients. For this general model, we show that an approximately fair solution always exists, where the approximation scales down the tax money each client can use for deviation by only a constant factor. Our existence result relies on rounding an interesting fractional relaxation of this problem. In certain cases, such as the facility location problem, it also implies a polynomial-time algorithm. We also show that similar results hold when the approximation is instead on the utility that clients derive by deviating.

Bio: Kamesh Munagala is Professor of Computer Science at Duke University. He obtained his Ph.D. (2003) and M.S. (2002) in Computer Science from Stanford University, and B.Tech. (1998) in Computer Science and Engineering from IIT Bombay. He subsequently spent a year as a postdoc in Biochemistry at Stanford before joining Duke CS in 2004. He is broadly interested in algorithm design and discrete optimization. His work is both methodological, encompassing approximation and online algorithms, algorithmic fairness, and algorithmic game theory, and applied to domains such as e-commerce and data analysis. He is an ACM Distinguished Scientist (2019); a recipient of the NSF CAREER Award (2008) and the Alfred P. Sloan Research Fellowship (2009); and a co-author of best-paper award winners at the WINE 2018 and WWW 2009 conferences. He currently serves as the Associate Chair of the CS Department at Duke.


April 12th | Rad Niazadeh, University of Chicago

More details to follow. 

April 19th | Sebastien Bubeck, Microsoft Research

Title: Set Chasing, with an application to online shortest path

Abstract: Since the late 19th century, mathematicians have realized the importance and generality of selection problems: given a collection of sets, select an element in each set, possibly in a "nice" way. Of particular importance in computer science and operations research is the scenario where the ground set is a metric space, in which case it is natural to ask for *Lipschitz* selection. In this talk I will describe a far-reaching extension of this classical Lipschitz selection problem to an *online* setting, where sets are streaming to the selector. I will show how Riemannian gradient descent (a.k.a. mirror descent) can be used to approach problems of this type. I will illustrate the power of the framework by solving a long-standing problem in online shortest path known as layered graph traversal (introduced by Papadimitriou and Yannakakis in 1989).
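For readers unfamiliar with mirror descent, the following minimal sketch (not taken from the talk; the step size and toy loss stream are illustrative assumptions) shows its best-known instance: entropic mirror descent on the probability simplex, also known as multiplicative weights.

```python
import math

def entropic_mirror_descent(losses, eta=0.5):
    """Mirror descent on the probability simplex with the entropy mirror map
    (multiplicative weights). `losses` is a sequence of loss vectors revealed
    online; returns the final point on the simplex."""
    d = len(losses[0])
    x = [1.0 / d] * d                      # start at the uniform distribution
    for g in losses:                       # each g acts as a (sub)gradient
        x = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        s = sum(x)
        x = [xi / s for xi in x]           # normalize: Bregman projection onto the simplex
    return x

# Toy stream: coordinate 0 always incurs the least loss,
# so the iterate concentrates its mass there.
stream = [[0.0, 1.0, 1.0]] * 50
x = entropic_mirror_descent(stream)
```

The entropy mirror map turns the gradient step into a multiplicative update followed by normalization, which is why the iterates stay on the simplex automatically; the talk's setting is far more general, but this is the prototypical case.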

Bio: Sébastien Bubeck is a researcher in the Theory Group at Microsoft Research. Prior to joining MSR, he was an assistant professor in the ORFE department at Princeton University. His research has won a few awards, including the 2010 Jacques Neveu prize (best French PhD in probability/statistics) and a 2015 Sloan Fellowship in Computer Science.


April 26th | Ozge Sahin, Johns Hopkins University

Title: Product Bundling: Assortment Selection in Scale and Consumer Behavior

Abstract: Bundle promotions can take various forms in how retailers offer them to consumers (e.g., after a purchase incidence, as advertised deals, or as personalized promotions) and how they are structured (e.g., buy one get one free, spend $100 get 25% off, or buy 3 get $20 off). They have become increasingly popular among brick-and-mortar and online retailers due to their potential to increase revenue and profit. Designing an optimal set of bundle promotions at scale and understanding consumer response to these promotions are highly relevant but challenging problems. In this talk, we will share our findings from two working papers on the topic.

In the first paper, we study the optimal design of bundle promotion assortments at scale. The general bundle set selection problem is NP-hard. We develop pseudo-polynomial and polynomial-time algorithms for special cases of the problem. We use the insights from the special cases to develop a linear-time approximation with a performance guarantee under some assumptions. We test our algorithms on medium to large instances and show that their performance is close to optimal. We also present insights on when bundle promotions are most useful to the firm by considering the interaction among products.

In the second part of the talk, we present our empirical findings on how bundle promotions affect consumer purchase and return behavior compared to simple markdowns. Using data from one of the largest apparel retailers in Turkey, we find that bundle promotions not only increase the purchase incidence for certain categories, but also decrease the return probability of products in the category, controlling for price, discount depth, and item characteristics.

We find that returns of products offered with a bundle discount decrease on average by 10% to 24% for 20% to 40% bundle discounts, compared to returns of the same products purchased with a regular percentage discount. We also show that accounting for the impact of bundle discounts on consumer returns significantly changes the optimal promotion strategy for the firm and the products assigned to bundle promotions. This translates into up to a 4% increase in the retailer's profits compared to a promotion strategy that does not account for returns.

Joint work with Ali Fattahi, Sahar Hemmati, and Wedad Elmaghraby.

Bio: Ozge Sahin is a Professor of Operations Management and Business Analytics at the Johns Hopkins Carey Business School. She received her PhD and MS degrees in Operations Research from Columbia University. She currently serves as the Faculty Director of Innovation Field Projects in the Johns Hopkins Carey Business School MBA Program. She teaches Operations Management, Business Analytics, and Advanced Business Analytics in masters and MBA programs.

Her research interests include pricing, marketplace analytics with an emphasis on consumer behavior, and strategic capacity and supply chain management. Some of her recent research projects include the analysis of pricing strategies with consumer search costs, service pricing, identifying consumer biases and heuristics in sequential decision problems, conditional promotions in retail, and pharmaceutical pricing. Ozge has published papers in academic journals including Management Science, Operations Research, and Manufacturing and Service Operations Management, among others. She serves as the Department Editor of the Revenue Management and Pricing area of the Decision Sciences Journal, an Associate Editor for Manufacturing and Service Operations Management, and an Associate Editor for Operations Research. She has served as a consultant to companies including Lucent Technologies, Amadeus SAS, and Amazon, and has been an Amazon Scholar in the Pricing Research and Machine Learning group at Amazon since 2019.

Recording: To be announced.

May 3rd | Jelena Diakonikolas, University of Wisconsin

Title: Halpern Iteration and Min-Max Optimization

Abstract: Halpern iteration is a classical iteration developed by Benjamin Halpern in 1967 as a method for solving fixed point operator equations. Over the past decades, the convergence of Halpern iteration has been studied in both asymptotic and nonasymptotic terms. However, its study was predominantly confined to operator equations. This trend changed a few years ago, when the connection between fixed point equations and monotone inclusion problems was utilized in my work to show that Halpern iteration leads to either optimal or near-optimal convergence guarantees for essentially all standard classes of problems with Lipschitz operators. The setting of monotone inclusion---which corresponds to finding (or approximating) a zero of a monotone operator---is of particular relevance here, as it generalizes variational inequality (equilibria) problems and provides guarantees in terms of both optimality gap and stationarity (norm of the gradient in unconstrained settings) in min-max optimization. Both variational inequalities and min-max optimization have undergone a renaissance in recent years due to their modeling power in applications such as adversarial training of machine learning models and distributionally robust optimization. I will discuss recent developments related to the use of Halpern iteration, including in settings where the operator is possibly non-monotone and can be accessed only via stochastic queries. To date, the guarantees that can be obtained with variants of Halpern iteration remain unmatched by any alternative algorithms and are all provided for the last iterate.
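To make the method concrete, here is a minimal illustrative sketch (not from the talk; the anchor weights 1/(k+2), the toy operator, and the iteration count are assumptions chosen for illustration) of Halpern iteration applied to a nonexpansive map where plain fixed-point iteration would fail to converge.

```python
def halpern(T, x0, n_iters=2000):
    """Halpern iteration x_{k+1} = l_k * x0 + (1 - l_k) * T(x_k) with the
    classical anchor weights l_k = 1/(k+2). For a nonexpansive T with a
    fixed point, this converges to the fixed point of T closest to the
    anchor x0."""
    x = x0
    for k in range(n_iters):
        lam = 1.0 / (k + 2)
        tx = T(x)
        x = tuple(lam * a + (1.0 - lam) * t for a, t in zip(x0, tx))
    return x

# A plane rotation by 90 degrees about (1, 1): nonexpansive (an isometry) but
# not a contraction, so the plain Picard iteration x <- T(x) orbits forever
# while Halpern iteration is pulled toward the unique fixed point (1, 1).
def rotate(x):
    u, v = x[0] - 1.0, x[1] - 1.0
    return (1.0 - v, 1.0 + u)

x = halpern(rotate, (4.0, 0.0))
```

The vanishing anchor weights are what distinguish Halpern iteration from averaged (Krasnoselskii-Mann) schemes: the pull toward x0 decays just slowly enough to force strong convergence, which is the mechanism the abstract's guarantees build on.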

Bio: Jelena Diakonikolas is an assistant professor in the Department of Computer Sciences at UW-Madison. Prior to joining UW-Madison, she held postdoctoral positions at UC Berkeley and Boston University. She received her PhD from Columbia University. Her research interests are in efficient optimization methods for large scale problems and connections between optimization and other areas such as dynamical systems and Markov Chain Monte Carlo methods. Jelena is a recipient of a Simons-Berkeley Research Fellowship, the Morton B. Friedman Prize for Excellence at Columbia Engineering, and a Qualcomm Innovation Fellowship.


IEOR-DRO Seminar Mailing list

Join the IEOR-DRO Seminar mailing list!