IEOR-DRO Seminar
The IEOR-DRO Seminar is a joint offering of the Department of Industrial Engineering & Operations Research (IEOR) and the Decision, Risk, and Operations (DRO) Division of Columbia Business School, featuring prominent research on decision-making via optimization, on modeling and managing uncertainty, and on all aspects of the operations function in firms.
Check out our seminar schedule below for details about upcoming seminars!
Spring 2023 Seminars
The schedule for this semester's seminars is below; talk details and speaker biographies are added once announced. After each seminar has taken place, its recording will be linked below as well.
January 31st | Jeff Hong, Fudan University
Title: The (Surprising) Rate Optimality of Greedy Algorithms for Large-Scale Ranking and Selection
Abstract: Large-scale ranking-and-selection (R&S), which aims to select the best alternative with the largest mean performance from a finite set of alternatives, has emerged as an important research topic in simulation optimization and multi-armed bandits. An ideal large-scale R&S algorithm should be rate optimal, i.e., the total sample size required to deliver an asymptotically non-zero probability of correct selection (PCS) grows linearly in the number of alternatives. We discover that the naïve greedy algorithm that keeps sampling the alternative with the largest running sample mean performs surprisingly well and appears rate optimal in solving large-scale R&S problems. We develop a boundary-crossing perspective and prove that the greedy algorithm is indeed rate optimal, and we further show that the obtained PCS lower bound is asymptotically tight for the slippage configuration with a common variance. To develop a practically competitive algorithm that harnesses the rate optimality of the greedy algorithm, we propose the explore-first greedy (EFG) algorithm, which adds an exploration phase to the greedy algorithm. We show that the new algorithm is simple, rate optimal, and consistent. A numerical study demonstrates that the EFG algorithm performs well compared to existing algorithms.
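For readers who want something concrete, below is a minimal Python sketch of the explore-first greedy idea as described in the abstract: a short round-robin exploration phase followed by greedy sampling of the current sample-mean leader. The exploration budget n0, the sampling budget, and all other details are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def explore_first_greedy(sample, k, n0, total_budget, rng=None):
    """Sketch of an explore-first greedy (EFG) scheme for ranking & selection.

    sample(i, rng) returns one noisy observation of alternative i's performance.
    Phase 1: draw n0 observations from every alternative (exploration).
    Phase 2: repeatedly sample the alternative with the largest running sample
    mean (greedy), then report the best-looking alternative at the end.
    The choice of n0 here is illustrative, not the paper's prescription.
    """
    rng = rng or np.random.default_rng()
    sums = np.zeros(k)
    counts = np.zeros(k, dtype=int)

    # Exploration phase: round-robin sampling of every alternative.
    for i in range(k):
        for _ in range(n0):
            sums[i] += sample(i, rng)
            counts[i] += 1

    # Greedy phase: always sample the current sample-mean leader.
    for _ in range(total_budget - k * n0):
        leader = int(np.argmax(sums / counts))
        sums[leader] += sample(leader, rng)
        counts[leader] += 1

    return int(np.argmax(sums / counts))  # selected alternative

# Toy usage: 100 normal alternatives, alternative 99 has the largest mean.
means = np.linspace(0.0, 1.0, 100)
pick = explore_first_greedy(lambda i, rng: rng.normal(means[i], 1.0),
                            k=100, n0=5, total_budget=2000)
```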
Bio: Jeff Hong received his bachelor's and Ph.D. degrees from Tsinghua University and Northwestern University, respectively. He is currently with Fudan University, where he is Fudan Distinguished Professor, Hongyi Chair Professor, Chair of the Department of Management Science in the School of Management, and Associate Dean of the School of Data Science. He was previously Chair Professor of Management Sciences at City University of Hong Kong and Professor of Industrial Engineering and Logistics Management at the Hong Kong University of Science and Technology. Prof. Hong's research interests include stochastic simulation, stochastic optimization, risk management, and supply chain management. He is currently the Simulation Area Editor of Operations Research and an Associate Editor of Management Science and ACM Transactions on Modeling and Computer Simulation, and he was President of the INFORMS Simulation Society from 2020 to 2022.
February 28th | Jean Pouget-Abadie, Google
Abstract: When the treatment assignment of one unit affects the outcome of another, we say there is interference. Interference is especially prevalent in marketplaces, where buyer and seller interactions lead to complex dependence structures. As a violation of the stable unit treatment value assumption (SUTVA), the presence of interference can bias standard estimators under naive randomized designs. In this talk, we will cover a set of design and estimation paradigms for conducting causal inference in a bipartite graph setting, inspired by (but not limited to) marketplace experiments, with specific attention to clustered randomized designs under different randomization constraints and to bias corrections to standard estimators.
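As a rough illustration of cluster-level randomization (not the bipartite-graph machinery or the bias corrections developed in the talk), the Python sketch below assigns treatment one coin flip per cluster and computes a naive difference-in-means estimate; all names, parameters, and the toy interference model are illustrative assumptions.

```python
import numpy as np

def cluster_randomized_diff_in_means(cluster_of, outcomes_fn, p=0.5, rng=None):
    """Toy cluster-randomized design with a naive difference-in-means estimate.

    cluster_of[i] gives the cluster label of unit i; treatment is assigned
    independently per cluster with probability p, so tightly connected units
    share an assignment and within-cluster interference is absorbed.
    outcomes_fn(assignment) returns one outcome per unit under that assignment.
    """
    rng = rng or np.random.default_rng()
    clusters = np.unique(cluster_of)
    cluster_treat = rng.random(len(clusters)) < p             # one coin per cluster
    z = cluster_treat[np.searchsorted(clusters, cluster_of)]  # unit-level assignment
    y = outcomes_fn(z)
    return y[z].mean() - y[~z].mean()

# Toy usage: 1000 units in 50 clusters; outcome = 1.0 * own treatment
# + 0.5 * fraction treated in the same cluster (interference) + noise.
rng = np.random.default_rng(0)
cluster_of = rng.integers(0, 50, size=1000)

def outcomes_fn(z):
    frac = np.array([z[cluster_of == c].mean() for c in cluster_of])
    return 1.0 * z + 0.5 * frac + rng.normal(0, 1, size=z.size)

tau_hat = cluster_randomized_diff_in_means(cluster_of, outcomes_fn, rng=rng)
```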
Bio: http://jean.pouget-abadie.com/
March 7th | Sergei Vassilvitskii, Google
Title: Going beyond worst-case analysis using predictions
Abstract: The theoretical study of algorithms and data structures has been bolstered by worst-case analysis, where we prove bounds on the running time, space, approximation ratio, competitive ratio, or other measure that holds even in the worst case. Worst-case analysis has proven invaluable for understanding aspects of both the complexity and practicality of algorithms, providing useful features like the ability to use algorithms as building blocks and subroutines with a clear picture of the worst-case performance. More and more, however, the limitations of worst-case analysis become apparent and create new challenges. In practice, we often do not face worst-case scenarios, and the question arises of how we can tune our algorithms to work even better on the kinds of instances we are likely to see, while ideally keeping a rigorous formal framework similar to what we have developed through worst-case analysis.
We examine a recent trend that develops algorithms parameterized by additional parameters which capture "the kinds of instances we are likely to see," and that obtains a finer-grained analysis of algorithms' performance. We will give examples of re-analyzing classical algorithms through this lens, as well as of developing new algorithms that expose new structural insights about the problems.
Based on work with Kareem Amin, Travis Dick, Michael Dinitz, Sungjin Im, Misha Khodak, Silvio Lattanzi, Thomas Lavastida, Thodoris Lykouris, and Andres Munoz Medina.
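A canonical toy example from the algorithms-with-predictions literature, offered here only as an illustration and not necessarily one covered in the talk, is ski rental with a predicted number of skiing days: the algorithm trusts the prediction but hedges via a robustness parameter lambda, so its competitive ratio degrades gracefully when the prediction is wrong. The Python sketch below follows the standard deterministic buy/rent thresholds.

```python
import math

def ski_rental_with_prediction(b, y_pred, lam):
    """Day (1-indexed) on which to buy skis, given buy cost b (renting costs
    1 per day), a predicted total number of ski days y_pred, and a robustness
    parameter lam in (0, 1]: trust the prediction, but cap the downside."""
    if y_pred >= b:
        return math.ceil(lam * b)   # prediction says "long season": buy early
    return math.ceil(b / lam)       # prediction says "short season": buy late

def total_cost(buy_day, b, actual_days):
    """Rent until buy_day, then pay b (or rent throughout if the season ends first)."""
    if actual_days < buy_day:
        return actual_days
    return (buy_day - 1) + b

# Toy usage: buy cost 10, prediction says 3 days, but the season lasts 30.
buy_day = ski_rental_with_prediction(b=10, y_pred=3, lam=0.5)
print(total_cost(buy_day, b=10, actual_days=30))  # bounded by roughly (1 + 1/lam) * OPT
```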
March 21st | Lihua Lei, Stanford University (GSB)
Title: Learning from a Biased Sample
Abstract: The empirical risk minimization approach to data-driven decision making assumes that we can learn a decision rule from training data drawn under the same conditions as the ones we want to deploy it under. However, in a number of settings, we may be concerned that our training sample is biased, and that some groups (characterized by either observable or unobservable attributes) may be under- or over-represented relative to the general population; and in this setting empirical risk minimization over the training set may fail to yield rules that perform well at deployment. Building on concepts from distributionally robust optimization and sensitivity analysis, we propose a method for learning a decision rule that minimizes the worst-case risk incurred under a family of test distributions whose conditional distributions of outcomes given covariates differ from the conditional training distribution by at most a constant factor, and whose covariate distributions are absolutely continuous with respect to the covariate distribution of the training data. We apply a result of Rockafellar and Uryasev to show that this problem is equivalent to an augmented convex risk minimization problem. We give statistical guarantees for learning a robust model using the method of sieves and propose a deep learning algorithm whose loss function captures our robustness target. We empirically validate our proposed method in simulations and a case study with the MIMIC-III dataset.
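For intuition on the Rockafellar and Uryasev step mentioned in the abstract: a worst-case mean over reweightings with likelihood ratio bounded by Gamma equals a conditional value-at-risk at level 1/Gamma, which can be written as a joint minimization over the model and an auxiliary scalar. The Python sketch below illustrates this in the simplest unconditional case; the squared-error loss, the bound Gamma, and the optimizer are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

# Rockafellar-Uryasev: for alpha in (0, 1],
#   CVaR_alpha(L) = min_eta  eta + E[max(L - eta, 0)] / alpha,
# and CVaR_{1/Gamma}(L) equals the worst-case mean of L over reweightings with
# likelihood ratio bounded by Gamma. Below: fit a scalar prediction theta by
# minimizing this augmented (convex) objective jointly over (theta, eta).

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=500)      # toy outcomes
Gamma = 2.0                              # illustrative shift bound
alpha = 1.0 / Gamma

def augmented_risk(params):
    theta, eta = params
    losses = (y - theta) ** 2            # illustrative squared-error loss
    return eta + np.mean(np.maximum(losses - eta, 0.0)) / alpha

theta_hat, eta_hat = minimize(augmented_risk, x0=[0.0, 1.0], method="Nelder-Mead").x
```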
Bio: Lihua Lei is an Assistant Professor of Economics at the Stanford Graduate School of Business, an Assistant Professor of Statistics (by courtesy), and a faculty fellow at the Stanford Institute for Economic Policy Research. He obtained his PhD from UC Berkeley. His research lies at the intersection of econometrics, causal inference, high-dimensional statistics, and optimization.
March 28th | Xin Chen, Georgia Tech
Title: Stochastic Optimization with Decisions Truncated by Random Variables and Its Applications in Network Revenue Management
Abstract: Motivated by network revenue management problems using booking limit control and by inventory models with random capacities, we study a class of stochastic optimization problems in which the decision variables are truncated by random variables, leading to non-convex programs due to the truncation. Interestingly, under some conditions, convex reformulations can be constructed. Leveraging a convex reformulation, we develop a stochastic gradient-based algorithm in the original non-convex space that achieves sample and gradient complexities matching those for solving stochastic convex optimization problems. Our numerical experiments on air-cargo network revenue management problems demonstrate the superior performance of booking limit control policies and of our proposed algorithm compared with state-of-the-art bid-price-based control policies.
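As a one-dimensional illustration of a decision truncated by a random variable (not the network algorithm or the convex reformulation presented in the talk), the Python sketch below runs stochastic gradient ascent on a single booking limit whose effective value is capped by a random cargo capacity; all parameters and distributions are illustrative assumptions.

```python
import numpy as np

def sga_truncated_booking_limit(r, c, sample_capacity, sample_demand,
                                x0=5.0, steps=5000, step_size=0.05, rng=None):
    """Toy stochastic gradient ascent for a single booking limit x whose
    effective value is truncated by a random capacity C:
        maximize  E[ r * min(min(x, C), D) - c * min(x, C) ],
    i.e., accepted bookings are capped by capacity and revenue by demand D.
    A stochastic (sub)gradient of the integrand at x is
        (r * 1{min(x, C) < D} - c) * 1{x < C}.
    One-dimensional illustration only; not the network algorithm in the talk.
    """
    rng = rng or np.random.default_rng()
    x = x0
    for _ in range(steps):
        cap, dem = sample_capacity(rng), sample_demand(rng)
        grad = (r * (min(x, cap) < dem) - c) * (x < cap)
        x = max(x + step_size * grad, 0.0)   # project back to x >= 0
    return x

# Toy usage: revenue 2 per accepted unit, cost 1, Poisson capacity and demand.
x_star = sga_truncated_booking_limit(
    r=2.0, c=1.0,
    sample_capacity=lambda rng: rng.poisson(8.0),
    sample_demand=lambda rng: rng.poisson(10.0))
```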
Bio: Xin Chen is the James C. Edenfield Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Prior to this appointment, he was a professor of industrial engineering at the University of Illinois at Urbana-Champaign. His research interests lie in optimization, data analytics, revenue management, and supply chain management. He received the INFORMS Revenue Management and Pricing Section Prize in 2009. He is a coauthor of the book "The Logic of Logistics: Theory, Algorithms, and Applications for Logistics and Supply Chain Management" (Second Edition, 2005, and Third Edition, 2014). He serves as the Department Editor for Logistics and Supply Chain Management of Naval Research Logistics and as an associate editor of several leading journals, including Operations Research, Management Science, and Production and Operations Management.
April 4th | Negin Golrezaei, MIT Sloan
TBD
April 11th | Elaine Shi, CMU
TBD
April 18th | Rob Bray, Northwestern University (Kellogg)
TBD
April 25th | Alan Scheller-Wolf, CMU (Tepper)
TBD
IEOR-DRO Seminar Mailing list
Join the IEOR-DRO Seminar mailing list!