By David B. Brown, Fuqua School of Business, Duke University, USA, dbbrown@duke.edu | James E. Smith, Tuck School of Business, Dartmouth College, USA, jim.smith@dartmouth.edu
In this monograph, we provide an overview of the information relaxation approach for calculating performance bounds in stochastic dynamic programs (DPs). The technique involves (1) relaxing the temporal feasibility (or nonanticipativity) constraints so the decision-maker (DM) has additional information before making decisions, and (2) incorporating a penalty that punishes the DM for violating these constraints. Our goal is to give a self-contained overview of the key theoretical results of the information relaxation approach as well as a review of research that has successfully used these techniques in a broad range of applications. We illustrate the approach with applications in inventory management, assortment planning, and portfolio optimization.
Dynamic programming (DP) provides a powerful framework for modeling complex decision problems in which uncertainty is resolved and decisions are made over time, but it is difficult to scale to complex problems. Monte Carlo simulation methods, by contrast, typically scale well, yet on their own they neither identify an optimal policy nor provide a performance bound. To address these limitations, the authors review the information relaxation approach, which works by reducing a complex stochastic DP to a series of scenario-specific deterministic optimization problems solved within a Monte Carlo simulation.
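To make this procedure concrete, the minimal Python sketch below (not from the monograph; the asset-selling model, its parameters, and the function name are illustrative assumptions) estimates the simplest such bound: a perfect-information relaxation with a zero penalty. Each simulated scenario yields a deterministic inner problem, here a maximization over selling times, and the average of the scenario-specific values is an upper bound on the value of any nonanticipative policy.

```python
import numpy as np

def perfect_info_bound(n_paths=100_000, T=10, discount=0.95,
                       s0=100.0, mu=0.05, sigma=0.2, seed=0):
    """Monte Carlo estimate of the perfect-information (zero-penalty)
    upper bound for a finite-horizon asset-selling (optimal stopping)
    problem: sell at some period t <= T and collect the discounted price.

    With the nonanticipativity constraints relaxed, a clairvoyant DM sees
    the entire price path and sells at the best discounted price; the
    average of these pathwise values bounds the optimal policy value
    from above.
    """
    rng = np.random.default_rng(seed)
    # Simulate geometric Brownian motion price paths (illustrative model).
    z = rng.standard_normal((n_paths, T))
    log_paths = np.cumsum((mu - 0.5 * sigma**2) + sigma * z, axis=1)
    prices = s0 * np.hstack([np.ones((n_paths, 1)), np.exp(log_paths)])
    # Inner problem for each scenario: a deterministic maximization over t.
    disc = discount ** np.arange(T + 1)
    pathwise_best = (prices * disc).max(axis=1)
    est = pathwise_best.mean()
    se = pathwise_best.std(ddof=1) / np.sqrt(n_paths)
    return est, se

bound, stderr = perfect_info_bound()
print(f"perfect-information upper bound ~ {bound:.2f} (+/- {1.96 * stderr:.2f})")
```

Incorporating a well-chosen penalty tightens this zero-penalty bound; in the duality theory reviewed in the monograph, an ideal penalty recovers the optimal value exactly.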
Writing in a tutorial style, the authors summarize the key ideas of information relaxation methods for stochastic DPs and demonstrate their use in several examples, providing a “one-stop shop” for researchers seeking to learn the essential ideas and tools for using these methods.
This book provides students, researchers, and practitioners with a comprehensive overview of a powerful technique.