Foundations and Trends® in Machine Learning > Vol 8 > Issue 5-6

Bayesian Reinforcement Learning: A Survey

By Mohammad Ghavamzadeh, Adobe Research and INRIA, France, mohammad.ghavamzadeh@inria.fr | Shie Mannor, Technion, Israel, shie@ee.technion.ac.il | Joelle Pineau, McGill University, Canada, jpineau@cs.mcgill.ca | Aviv Tamar, University of California, Berkeley, USA, avivt@berkeley.edu

 
Suggested Citation
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau and Aviv Tamar (2015), "Bayesian Reinforcement Learning: A Survey", Foundations and Trends® in Machine Learning: Vol. 8: No. 5-6, pp 359-483. http://dx.doi.org/10.1561/2200000049

Publication Date: 26 Nov 2015
© 2015 M. Ghavamzadeh, S. Mannor, J. Pineau, and A. Tamar
 
Subjects
Reinforcement learning
 

In this article:
1. Introduction
2. Technical Background
3. Bayesian Bandits
4. Model-based Bayesian Reinforcement Learning
5. Model-free Bayesian Reinforcement Learning
6. Risk-aware Bayesian Reinforcement Learning
7. BRL Extensions
8. Outlook
Acknowledgements
Appendices
References

Abstract

Bayesian methods for machine learning have been widely investigated, yielding principled methods for incorporating prior information into inference algorithms. In this survey, we provide an in-depth review of the role of Bayesian methods in the reinforcement learning (RL) paradigm. The major incentives for incorporating Bayesian reasoning in RL are: 1) it provides an elegant approach to action selection (exploration/exploitation) as a function of the uncertainty in learning; and 2) it provides machinery for incorporating prior knowledge into the algorithms. We first discuss models and methods for Bayesian inference in the simple single-step bandit model. We then review the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model. We also present Bayesian methods for model-free RL, where priors are expressed over the value function or policy class. The objective of this paper is to provide a comprehensive survey of Bayesian RL algorithms and their theoretical and empirical properties.
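As a concrete illustration of Bayesian inference in the single-step bandit setting mentioned in the abstract, the sketch below (not taken from the survey itself) shows Beta-Bernoulli posterior updating with Thompson sampling for action selection; the arm probabilities, uniform prior, and round count are illustrative assumptions.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling for a Bernoulli bandit.

    Each arm keeps a Beta(alpha, beta) posterior over its success
    probability; on each round we sample one value from every posterior
    and pull the arm with the largest sample, so exploration versus
    exploitation follows directly from posterior uncertainty.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1.0] * k  # Beta(1, 1) uniform prior on each arm (an assumption)
    beta = [1.0] * k
    for _ in range(n_rounds):
        # Draw a plausible success probability for each arm from its posterior.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        # Conjugate posterior update: successes add to alpha, failures to beta.
        alpha[arm] += reward
        beta[arm] += 1 - reward
    # Posterior mean of each arm's success probability after learning.
    return [alpha[i] / (alpha[i] + beta[i]) for i in range(k)]

posterior_means = thompson_sampling([0.3, 0.5, 0.7])
```

After enough rounds the posterior mean of the best arm concentrates near its true success probability, while the suboptimal arms retain wide posteriors because they are pulled rarely.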

DOI:10.1561/2200000049
ISBN: 978-1-68083-088-0 (paperback, 148 pp., $95.00)
ISBN: 978-1-68083-089-7 (e-book, $250.00)

Bayesian Reinforcement Learning

Bayesian methods for machine learning have been widely investigated, yielding principled methods for incorporating prior information into inference algorithms. This monograph provides the reader with an in-depth review of the role of Bayesian methods in the reinforcement learning (RL) paradigm. The major incentives for incorporating Bayesian reasoning in RL are that it provides an elegant approach to action selection (exploration/exploitation) as a function of the uncertainty in learning, and it provides machinery for incorporating prior knowledge into the algorithms.

Bayesian Reinforcement Learning: A Survey first discusses models and methods for Bayesian inference in the simple single-step Bandit model. It then reviews the extensive recent literature on Bayesian methods for model-based RL, where prior information can be expressed on the parameters of the Markov model. It also presents Bayesian methods for model-free RL, where priors are expressed over the value function or policy class.

Bayesian Reinforcement Learning: A Survey is a comprehensive reference for students and researchers with an interest in Bayesian RL algorithms and their theoretical and empirical properties.

 
MAL-049