
An Invitation to Deep Reinforcement Learning

By Bernhard Jaeger, University of Tübingen, Germany, bernhard.jaeger@uni-tuebingen.de | Andreas Geiger, University of Tübingen, Germany, a.geiger@uni-tuebingen.de

 
Suggested Citation
Bernhard Jaeger and Andreas Geiger (2024), "An Invitation to Deep Reinforcement Learning", Foundations and Trends® in Optimization: Vol. 7: No. 1, pp 1-80. http://dx.doi.org/10.1561/2400000049

Publication Date: 10 Dec 2024
© 2024 B. Jaeger and A. Geiger
 
Subjects
Reinforcement learning,  Deep learning,  Online learning,  Stochastic optimization
 

In this article:
1. Introduction
2. Notation
3. Optimization of Non-Differentiable Objectives
4. Data Collection
5. Off-Policy Reinforcement Learning
6. On-Policy Reinforcement Learning
7. Discussion
Acknowledgments
Appendices
References

Abstract

Training a deep neural network to maximize a target objective has become the standard recipe for successful machine learning over the last decade. These networks can be optimized with supervised learning if the target objective is differentiable. However, this is not the case for many interesting problems. Common objectives like the intersection over union (IoU) or bilingual evaluation understudy (BLEU) score, as well as rewards, cannot be optimized with supervised learning. A common workaround is to define differentiable surrogate losses, which leads to suboptimal solutions with respect to the actual objective. In recent years, reinforcement learning (RL) has emerged as a promising alternative for optimizing deep neural networks to maximize non-differentiable objectives. Examples include aligning large language models via human feedback, code generation, object detection, and control problems. This makes RL techniques relevant to the larger machine learning audience. The subject is, however, time-intensive to approach due to the large range of methods and their often highly theoretical presentation. This monograph takes an approach different from classic RL textbooks: rather than focusing on tabular problems, we introduce RL as a generalization of supervised learning, which we first apply to non-differentiable objectives and later to temporal problems. Assuming only basic knowledge of supervised learning, the reader will be able to understand state-of-the-art deep RL algorithms like proximal policy optimization (PPO) after reading this tutorial.
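To make the surrogate-loss point of the abstract concrete, below is a minimal illustrative sketch (not part of the monograph itself) contrasting the non-differentiable IoU metric with a commonly used differentiable "soft IoU" surrogate. The function names and shapes are chosen here for illustration only; the sketch assumes PyTorch.

```python
import torch

def hard_iou(pred_mask, target_mask):
    """The actual objective: IoU on binary masks. Thresholding makes it
    piecewise constant, so its gradient is zero almost everywhere and it
    cannot be optimized directly with gradient descent."""
    pred_bin = (pred_mask > 0.5).float()
    inter = (pred_bin * target_mask).sum()
    union = pred_bin.sum() + target_mask.sum() - inter
    return inter / union.clamp(min=1.0)

def soft_iou_loss(pred_prob, target_mask, eps=1e-6):
    """A differentiable surrogate: replace the binary prediction with the
    predicted probabilities. Minimizing this loss only approximately
    maximizes the true IoU."""
    inter = (pred_prob * target_mask).sum()
    union = pred_prob.sum() + target_mask.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

# Toy example: probabilities as they might come from a segmentation network.
pred = torch.rand(1, 64, 64, requires_grad=True)
target = (torch.rand(1, 64, 64) > 0.5).float()

loss = soft_iou_loss(pred, target)
loss.backward()                          # gradients flow through the surrogate
print(hard_iou(pred.detach(), target))   # the non-differentiable metric we actually care about
```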

DOI: 10.1561/2400000049