Foundations and Trends® in Machine Learning, Vol. 15, Issue 5

Risk-Sensitive Reinforcement Learning via Policy Gradient Search

By Prashanth L. A., Indian Institute of Technology Madras, India, prashla@cse.iitm.ac.in | Michael C. Fu, University of Maryland, College Park, USA, mfu@umd.edu

 
Suggested Citation
Prashanth L. A. and Michael C. Fu (2022), "Risk-Sensitive Reinforcement Learning via Policy Gradient Search", Foundations and Trends® in Machine Learning: Vol. 15: No. 5, pp 537-693. http://dx.doi.org/10.1561/2200000091

Publication Date: 15 Jun 2022
© 2022 Prashanth L. A. and M. C. Fu
 
Subjects
Reinforcement learning, Optimization, Learning and statistical methods, Markov Decision Processes, Risk analysis, Simulation, Stochastic optimization
 


Abstract

The objective in a traditional reinforcement learning (RL) problem is to find a policy that optimizes the expected value of a performance metric such as the infinite-horizon cumulative discounted or long-run average cost/reward. In practice, optimizing the expected value alone may not be satisfactory, in that it may be desirable to incorporate the notion of risk into the optimization problem formulation, either in the objective or as a constraint. Various risk measures have been proposed in the literature, e.g., exponential utility, variance, percentile performance, chance constraints, value at risk (quantile), conditional value-at-risk, prospect theory and its later enhancement, cumulative prospect theory.

In this monograph, we consider risk-sensitive RL in two settings: one where the goal is to find a policy that optimizes the usual expected value objective while ensuring that a risk constraint is satisfied, and the other where the risk measure is the objective. We survey some of the recent work in this area, specifically where policy gradient search is the solution approach. In the first risk-sensitive RL setting, we cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, we consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures. This non-exhaustive survey aims to give a flavor of the challenges involved in solving risk-sensitive RL problems using policy gradient methods, as well as to outline some potential future research directions.
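As a rough illustration of the Lagrangian template referred to in the abstract (the notation below is ours and purely illustrative, not taken verbatim from the monograph): with a parameterized policy pi_theta, expected return J(theta), a risk measure R(theta) such as the variance or CVaR of the return, and a risk tolerance alpha, the constrained problem and its Lagrangian relaxation can be written as

\[
\max_{\theta} \; J(\theta) \quad \text{subject to} \quad \mathcal{R}(\theta) \le \alpha,
\qquad
L(\theta, \lambda) \;=\; J(\theta) \;-\; \lambda \left( \mathcal{R}(\theta) - \alpha \right), \quad \lambda \ge 0.
\]

A typical policy gradient scheme then performs gradient ascent in theta and gradient descent in lambda on L(theta, lambda), usually on two timescales, with the gradients estimated from simulated trajectories.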

DOI: 10.1561/2200000091
ISBN (paperback): 978-1-63828-026-2
170 pp. $99.00
ISBN (e-book, PDF): 978-1-63828-027-9
170 pp. $145.00
Table of contents:
1. Introduction
2. Markov Decision Processes
3. Risk Measures
4. Background on Policy Evaluation and Gradient Estimation
5. Policy Gradient Templates for Risk-sensitive RL
6. MDPs with Risk as the Constraint
7. MDPs with Risk as the Objective
8. Conclusions and Future Challenges
Acknowledgements
References

Risk-Sensitive Reinforcement Learning via Policy Gradient Search

Reinforcement learning (RL) is one of the foundational pillars of artificial intelligence and machine learning. An important consideration in any optimization or control problem is the notion of risk, but its incorporation into RL has been a fairly recent development. This monograph surveys research on risk-sensitive RL that uses policy gradient search.

The authors survey some of the recent work in this area, specifically where policy gradient search is the solution approach. In the first risk-sensitive RL setting, they cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, they consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures.
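As a minimal, self-contained sketch of such a Lagrangian policy gradient scheme (the toy environment, Gaussian policy, step sizes, and variance-constraint estimator below are our own illustrative choices, not the authors' algorithm):

# Illustrative two-timescale Lagrangian policy gradient sketch (not the
# monograph's exact algorithm): maximize the expected return subject to a
# variance constraint on the return, using score-function (REINFORCE)
# gradient estimates on a toy one-step problem.
import numpy as np

rng = np.random.default_rng(0)

def sample_return(action):
    # Hypothetical environment: higher actions earn more on average,
    # but the return also becomes more variable.
    return action + (0.5 + action) * rng.normal()

theta = 0.0                  # mean of a Gaussian policy over a scalar action
lam = 0.0                    # Lagrange multiplier for the variance constraint
alpha = 1.0                  # risk tolerance: Var(return) <= alpha
sigma = 0.3                  # fixed policy standard deviation
a_step, l_step = 1e-2, 1e-3  # two timescales: slower multiplier updates

for it in range(5000):
    # Sample a batch of actions and returns from the current policy.
    actions = theta + sigma * rng.normal(size=64)
    rewards = np.array([sample_return(a) for a in actions])
    mean_r, var_r = rewards.mean(), rewards.var()

    # Score function of the Gaussian policy, d log pi / d theta.
    score = (actions - theta) / sigma**2

    # Gradient of the Lagrangian J(theta) - lam * (Var(theta) - alpha);
    # the variance gradient is a likelihood-ratio estimate, centered by
    # the batch mean for variance reduction (approximate, good enough here).
    grad_J = np.mean(score * rewards)
    grad_var = np.mean(score * (rewards - mean_r) ** 2)
    theta += a_step * (grad_J - lam * grad_var)

    # Dual ascent on the multiplier, projected to stay non-negative.
    lam = max(0.0, lam + l_step * (var_r - alpha))

print(f"theta = {theta:.3f}, lambda = {lam:.3f}")

The two step sizes reflect the usual two-timescale arrangement: the policy parameter moves on the faster timescale, while the Lagrange multiplier adapts slowly to enforce the risk constraint.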

Written for novices and experts alike, the text is completely self-contained, yet organized so that expert readers can skip the background chapters. It is a complete guide for students and researchers working on this aspect of machine learning.

 
MAL-091