Risk-Sensitive Reinforcement Learning via Policy Gradient Search

Cited by: 10
Authors
Prashanth, L. A. [1 ]
Fu, Michael C. [2 ]
Affiliations
[1] Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
[2] University of Maryland, College Park, MD 20742, USA
Source
FOUNDATIONS AND TRENDS IN MACHINE LEARNING, 2022
Keywords
MARKOV DECISION-PROCESSES; ACTOR-CRITIC ALGORITHM; STOCHASTIC-APPROXIMATION; PROSPECT-THEORY; DISCRETE-TIME; NEUTRAL/MINIMAX CONTROL; CONVERGENCE RATE; OPTIMIZATION; UTILITY; COST;
DOI
10.1561/2200000091
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The objective in a traditional reinforcement learning (RL) problem is to find a policy that optimizes the expected value of a performance metric such as the infinite-horizon cumulative discounted or long-run average cost/reward. In practice, optimizing the expected value alone may not be satisfactory, in that it may be desirable to incorporate the notion of risk into the optimization problem formulation, either in the objective or as a constraint. Various risk measures have been proposed in the literature, e.g., exponential utility, variance, percentile performance, chance constraints, value-at-risk (quantile), conditional value-at-risk, prospect theory, and its later enhancement, cumulative prospect theory. In this monograph, we consider risk-sensitive RL in two settings: one where the goal is to find a policy that optimizes the usual expected value objective while ensuring that a risk constraint is satisfied, and the other where the risk measure is the objective. We survey some of the recent work in this area, specifically where policy gradient search is the solution approach. In the first risk-sensitive RL setting, we cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, we consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures. This non-exhaustive survey aims to give a flavor of the challenges involved in solving risk-sensitive RL problems using policy gradient methods, as well as to outline some potential future research directions.
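To make the Lagrangian template mentioned in the abstract concrete, the following is a minimal sketch of the risk-constrained formulation and the primal-dual gradient iteration it suggests. The notation is illustrative rather than taken from the monograph: J(θ) denotes the expected cost under policy parameter θ, G(θ) a risk measure of the return (e.g., variance or CVaR), α the risk tolerance, λ ≥ 0 the Lagrange multiplier, and γ₁(k), γ₂(k) step-size schedules.

\begin{align*}
  &\text{Constrained problem:}        && \min_{\theta}\; J(\theta) \quad \text{subject to} \quad G(\theta) \le \alpha, \\
  &\text{Lagrangian:}                 && L(\theta,\lambda) = J(\theta) + \lambda\bigl(G(\theta) - \alpha\bigr), \qquad \lambda \ge 0, \\
  &\text{Policy update (descent):}    && \theta_{k+1} = \theta_k - \gamma_1(k)\,\widehat{\nabla_{\theta} L}(\theta_k, \lambda_k), \\
  &\text{Multiplier update (ascent):} && \lambda_{k+1} = \Bigl[\lambda_k + \gamma_2(k)\,\bigl(\widehat{G}(\theta_k) - \alpha\bigr)\Bigr]_{+}.
\end{align*}

Here \widehat{\nabla_{\theta} L} and \widehat{G} are sampled estimates (obtained, for instance, via likelihood-ratio or simultaneous-perturbation gradient estimators), [\,\cdot\,]_{+} denotes projection onto the non-negative reals, and in the two-timescale schemes common in this literature the multiplier is typically updated on the slower timescale (γ₂(k)/γ₁(k) → 0) so that the policy recursion sees an essentially fixed λ.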
Pages: 537 - 693
Page count: 157
Related papers
50 records in total
  • [21] State-Augmentation Transformations for Risk-Sensitive Reinforcement Learning
    Ma, Shuai
    Yu, Jia Yuan
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4512 - 4519
  • [22] Risk-sensitive reinforcement learning algorithms with generalized average criterion
    Chang-ming Yin
    Wang Han-xing
    Zhao Fei
    Applied Mathematics and Mechanics, 2007, 28 : 405 - 416
  • [24] Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach
    Fei, Yingjie
    Yang, Zhuoran
    Wang, Zhaoran
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021
  • [25] Risk-sensitive reinforcement learning applied to control under constraints
    Geibel, P
    Wysotzki, F
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2005, 24 : 81 - 108
  • [27] Risk-Sensitive Portfolio Management by using Distributional Reinforcement Learning
    Harnpadungkij, Thammasorn
    Chaisangmongkon, Warasinee
    Phunchongharn, Phond
    2019 IEEE 10TH INTERNATIONAL CONFERENCE ON AWARENESS SCIENCE AND TECHNOLOGY (ICAST 2019), 2019, : 110 - 115
  • [28] Risk-Sensitive Reinforcement Learning for URLLC Traffic in Wireless Networks
    Ben Khalifa, Nesrine
    Assaad, Mohamad
    Debbah, Merouane
2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019
  • [29] Uncertainty quantification via a memristor Bayesian deep neural network for risk-sensitive reinforcement learning
    Lin, Yudeng
    Zhang, Qingtian
    Gao, Bin
    Tang, Jianshi
    Yao, Peng
    Li, Chongxuan
    Huang, Shiyu
    Liu, Zhengwu
    Zhou, Ying
    Liu, Yuyi
    Zhang, Wenqiang
    Zhu, Jun
    Qian, He
    Wu, Huaqiang
NATURE MACHINE INTELLIGENCE, 2023, 5 (07) : 714 - 723