Robust Risk-Aware Reinforcement Learning

Cited by: 10
Authors
Jaimungal, Sebastian [1]
Pesenti, Silvana M. [1]
Wang, Ye Sheng [1]
Tatsat, Hariom [2]
Affiliations
[1] Univ Toronto, Dept Stat Sci, Toronto, ON M5G 1Z5, Canada
[2] Barclays Capital, New York, NY 10020 USA
Source
SIAM JOURNAL ON FINANCIAL MATHEMATICS | 2022, Vol. 13, No. 1
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
robust optimization; reinforcement learning; risk measures; Wasserstein distance; statistical arbitrage; portfolio optimization; CHOICE;
DOI
10.1137/21M144640X
Chinese Library Classification
F8 [Public Finance, Finance];
Subject Classification Code
0202;
Abstract
We present a reinforcement learning (RL) approach for robust optimization of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank dependent expected utility (RDEU). RDEU allows agents to seek gains, while simultaneously protecting themselves against downside risk. To robustify optimal policies against model uncertainty, we assess a policy not by its distribution but rather by the worst possible distribution that lies within a Wasserstein ball around it. Thus, our problem formulation may be viewed as an actor/agent choosing a policy (the outer problem) and the adversary then acting to worsen the performance of that strategy (the inner problem). We develop explicit policy gradient formulae for the inner and outer problems and show their efficacy on three prototypical financial problems: robust portfolio allocation, benchmark optimization, and statistical arbitrage.
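To make the optimization criterion concrete, the following is a minimal LaTeX sketch of one common way to write RDEU and the inner/outer robust problem described in the abstract. The symbols u, g, epsilon, and F^{\pi_\theta}, and the particular quantile-based form, are illustrative assumptions rather than the paper's exact notation.
% Hedged sketch: one common quantile-based form of rank dependent expected utility (RDEU)
% for a performance distribution F, with utility u and distortion g (increasing, g(0)=0, g(1)=1).
% Conventions differ (distortion applied to the CDF vs. its survival function), so treat this as illustrative.
\[
  \mathrm{RDEU}(F) \;=\; \int_0^1 u\!\big(F^{-1}(p)\big)\,\mathrm{d}g(p).
\]
% Robust risk-aware problem sketched in the abstract: the agent (outer sup) chooses policy parameters
% \theta, and the adversary (inner inf) chooses the worst distribution within a Wasserstein ball of
% radius \epsilon around the distribution F^{\pi_\theta} induced by the policy \pi_\theta.
\[
  \sup_{\theta}\; \inf_{F \,:\, d_W\!\left(F,\; F^{\pi_\theta}\right)\,\le\,\epsilon}\; \mathrm{RDEU}(F),
\]
% where d_W denotes a (p-)Wasserstein distance; policy gradient formulae are developed for both
% the inner (adversary) and outer (agent) problems.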
Pages: 213-226
Number of pages: 14
Related Papers
50 records in total
  • [1] Risk-Aware Reinforcement Learning Based Federated Learning Framework for IoV
    Chen, Yuhan
    Liu, Zhibo
    Lu, Xiaozhen
    Xiao, Liang
    [J]. 2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [2] Risk-Aware Deep Reinforcement Learning for Robot Crowd Navigation
    Sun, Xueying
    Zhang, Qiang
    Wei, Yifei
    Liu, Mingmin
    [J]. ELECTRONICS, 2023, 12 (23)
  • [3] Risk-Aware Transfer in Reinforcement Learning using Successor Features
    Gimelfarb, Michael
    Barreto, Andre
    Sanner, Scott
    Lee, Chi-Guhn
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Risk-Aware Reinforcement Learning for Multi-Period Portfolio Selection
    Winkel, David
    Strauss, Niklas
    Schubert, Matthias
    Seidl, Thomas
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT VI, 2023, 13718 : 185 - 200
  • [5] Risk-Aware Federated Reinforcement Learning-Based Secure IoV Communications
    Lu, Xiaozhen
    Xiao, Liang
    Xiao, Yilin
    Wang, Wei
    Qi, Nan
    Wang, Qian
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 14656 - 14671
  • [6] Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation
    Triest, Samuel
    Castro, Mateo Guaman
    Maheshwari, Parv
    Sivaprakasam, Matthew
    Wang, Wenshan
    Scherer, Sebastian
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 924 - 930
  • [7] DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture Fleeting Intraday Trading Opportunities
    Sun, Shuo
    Xue, Wanqi
    Wang, Rundong
    He, Xu
    Zhu, Junlei
    Li, Jian
    An, Bo
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1858 - 1867
  • [8] Monte Carlo tree search algorithms for risk-aware and multi-objective reinforcement learning
    Hayes, Conor F.
    Reymond, Mathieu
    Roijers, Diederik M.
    Howley, Enda
    Mannion, Patrick
    [J]. AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2023, 37 (02)
  • [9] ARSL-V: A risk-aware relay selection scheme using reinforcement learning in VANETs
    Liu, Xuejiao
    Wang, Chuanhua
    Huang, Lingfeng
    Xia, Yingjie
    [J]. PEER-TO-PEER NETWORKING AND APPLICATIONS, 2024, 17 (03) : 1750 - 1767
  • [10] Robust Beamforming for Massive MIMO LEO Satellite Communications: A Risk-Aware Learning Framework
    Alsenwi, Madyan
    Lagunas, Eva
    Chatzinotas, Symeon
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (05) : 6560 - 6571