By Cem Tekin, University of California, Los Angeles, USA, cmtkn@ucla.edu | Mingyan Liu, University of Michigan, USA, mingyan@umich.edu
In this monograph we provided a tutorial on a family of sequential learning and decision problems known as multi-armed bandit problems. We introduced a wide range of application scenarios for this learning framework, as well as many of its variants. The more detailed discussion focused on stochastic bandit problems, with rewards driven by either an IID or a Markovian process, and on environments consisting of a single user or multiple simultaneous users. We also presented literature on the learning of MDPs, which captures coupling among the evolution of different options that the classical MAB formulation does not.
This monograph provides a tutorial on a family of sequential learning and decision problems known as multi-armed bandit problems. In such problems, every decision serves the purpose of exploration, exploitation, or both. This balancing act between exploration and exploitation is characteristic of this type of "learning-on-the-go" problem, in which what has been learned so far must be applied instantly, even as learning continues. The authors give an in-depth introduction to the technical aspects of the theory underlying these decision-making problems. The coverage is comprehensive and includes topics with applications in many networking systems, including recommender systems, ad placement systems, the smart grid, and clinical trials.
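As a concrete illustration of this exploration-exploitation balance, the following is a minimal sketch of the classical UCB1 index policy for stochastic bandits with IID rewards, one canonical algorithm in the family the monograph surveys. It is not code from the monograph itself; the function names and the Bernoulli reward setup are illustrative assumptions.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 sketch: in each round, pull the arm with the
    highest upper confidence index (empirical mean + exploration bonus)."""
    counts = [0] * n_arms    # number of times each arm has been pulled
    means = [0.0] * n_arms   # empirical mean reward of each arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initialization: pull each arm once
        else:
            # Index = exploitation term (empirical mean)
            #       + exploration term (confidence bonus shrinking with pulls)
            arm = max(range(n_arms),
                      key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
        total_reward += reward
    return total_reward

# Usage: three Bernoulli arms with unknown success probabilities
# (a hypothetical instance; the true best arm has mean 0.7).
if __name__ == "__main__":
    probs = [0.3, 0.5, 0.7]
    horizon = 10_000
    reward = ucb1(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                  n_arms=3, horizon=horizon)
    print(f"average reward: {reward / horizon:.3f}")
```

The key design point is that the exploration bonus decays as an arm accumulates pulls, so early rounds explore broadly while later rounds concentrate on the empirically best arm; this is what keeps the regret of such index policies logarithmic in the horizon for IID rewards.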
Online Learning Methods for Networking is essential reading for students working in networking and machine learning. Designers of many network-based systems will find it a valuable resource for improving their technology.