Reinforcement Learning for Constrained Markov Decision Processes

Cited: 0
Authors
Gattami, Ather [1]
Bai, Qinbo [2]
Aggarwal, Vaneet [2]
Affiliations
[1] AI Sweden, Stockholm, Sweden
[2] Purdue Univ, W Lafayette, IN 47907 USA
Keywords
ALGORITHM;
DOI
None available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we consider the problem of optimization and learning for constrained and multi-objective Markov decision processes, for both discounted rewards and expected average rewards. We formulate the problems as zero-sum games in which one player (the agent) solves a Markov decision problem while its opponent solves a bandit optimization problem; we call these Markov-Bandit games. We extend Q-learning to solve Markov-Bandit games and show that our new Q-learning algorithms converge to the optimal solutions of the zero-sum Markov-Bandit games, and hence to the optimal solutions of the constrained and multi-objective Markov decision problems. We provide numerical examples where we compute the optimal policies and show by simulation that the algorithm converges to them. To the best of our knowledge, this is the first time Q-learning algorithms have been guaranteed to converge to optimal stationary policies for the multi-objective reinforcement learning problem, under both discounted and expected average rewards.
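The agent-versus-bandit structure described in the abstract can be illustrated with a standard primal-dual Q-learning sketch on a toy constrained MDP: the agent runs Q-learning on a Lagrangian-scalarized reward, while a "bandit" opponent adjusts the Lagrange multiplier by gradient ascent on the constraint violation. This is an illustrative approximation of that general idea, not the paper's exact algorithm; the transition matrix, rewards, costs, budget, and step sizes below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action constrained MDP (all numbers hypothetical).
# P[s, a] is the next-state distribution; R is the reward; C is the constraint cost.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 2.0],
              [0.5, 1.5]])
C = np.array([[0.0, 1.0],
              [0.0, 1.0]])
b = 0.3        # constraint budget: keep the average cost at or below b
gamma = 0.9    # discount factor
alpha, eta, eps = 0.1, 0.01, 0.1

Q = np.zeros((2, 2))   # Q-values of the scalarized (Lagrangian) reward
lam = 0.0              # Lagrange multiplier, played by the bandit opponent
s, avg_cost = 0, 0.0

for t in range(20000):
    # Agent: epsilon-greedy Q-learning on the scalarized reward.
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next = int(rng.choice(2, p=P[s, a]))
    r_lag = R[s, a] - lam * (C[s, a] - b)
    Q[s, a] += alpha * (r_lag + gamma * Q[s_next].max() - Q[s, a])
    # Opponent: projected gradient ascent on the dual variable,
    # increasing lam while the constraint is violated.
    lam = max(0.0, lam + eta * (C[s, a] - b))
    avg_cost += (C[s, a] - avg_cost) / (t + 1)
    s = s_next
```

Without the multiplier, the greedy policy would always take the high-reward, cost-1 action; as lam grows, the agent is pushed to mix actions and the running average cost is driven toward the budget b.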
Pages: 11
Related papers (50 records)
  • [1] Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes
    Brazdil, Tomas
    Chatterjee, Krishnendu
    Novotny, Petr
    Vahala, Jiri
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9794 - 9801
  • [2] Learning in Constrained Markov Decision Processes
    Singh, Rahul
    Gupta, Abhishek
    Shroff, Ness B.
    [J]. IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2023, 10 (01): : 441 - 453
  • [3] Semi-Infinitely Constrained Markov Decision Processes and Provably Efficient Reinforcement Learning
    Zhang, Liangyu
    Peng, Yang
    Yang, Wenhao
    Zhang, Zhihua
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 3722 - 3735
  • [4] Reinforcement Learning in Robust Markov Decision Processes
    Lim, Shiau Hong
    Xu, Huan
    Mannor, Shie
    [J]. MATHEMATICS OF OPERATIONS RESEARCH, 2016, 41 (04) : 1325 - 1353
  • [5] Model-free Safe Reinforcement Learning Method Based on Constrained Markov Decision Processes
    Zhu, Fei
    Ge, Yang-Yang
    Ling, Xing-Hong
    Liu, Quan
    [J]. Ruan Jian Xue Bao/Journal of Software, 2022, 33 (08): : 3086 - 3102
  • [6] A Sublinear-Regret Reinforcement Learning Algorithm on Constrained Markov Decision Processes with reset action
    Watanabe, Takashi
    Sakuragawa, Takashi
    [J]. ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, 2020, : 51 - 55
  • [7] A reinforcement learning based algorithm for Markov decision processes
    Bhatnagar, S
    Kumar, S
    [J]. 2005 International Conference on Intelligent Sensing and Information Processing, Proceedings, 2005, : 199 - 204
  • [8] A sensitivity view of Markov decision processes and reinforcement learning
    Cao, XR
    [J]. MODELING, CONTROL AND OPTIMIZATION OF COMPLEX SYSTEMS: IN HONOR OF PROFESSOR YU-CHI HO, 2003, 14 : 261 - 283
  • [9] Model-Based Reinforcement Learning for Infinite-Horizon Discounted Constrained Markov Decision Processes
    HasanzadeZonuzy, Aria
    Kalathil, Dileep
    Shakkottai, Srinivas
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2519 - 2525
  • [10] On constrained Markov decision processes
    Department of Econometrics, University of Sydney, Sydney, NSW 2006, Australia
    [J]. Oper Res Lett, 1 (25-28):