Action Candidate Driven Clipped Double Q-Learning for Discrete and Continuous Action Tasks

Cited by: 5
Authors
Jiang, Haobo [1 ,2 ,3 ]
Li, Guangyu [1 ,2 ,3 ]
Xie, Jin [1 ,2 ,3 ]
Yang, Jian [1 ,2 ,3 ]
Affiliations
[1] Nanjing Univ Sci & Technol, PCA Lab, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens In, Minist Educ, Nanjing 210094, Peoples R China
[3] Nanjing Univ Sci & Technol, Jiangsu Key Lab Image & Video Understanding Socia, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Q-learning; Task analysis; Benchmark testing; Approximation algorithms; Toy manufacturing industry; Markov processes; Learning systems; Clipped double Q-learning; overestimation bias; reinforcement learning; underestimation bias; REINFORCEMENT; BIAS;
DOI
10.1109/TNNLS.2022.3203024
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Double Q-learning is a popular reinforcement learning algorithm for Markov decision process (MDP) problems. Clipped double Q-learning, an effective variant of double Q-learning, employs the clipped double estimator to approximate the maximum expected action value. Due to the underestimation bias of the clipped double estimator, the performance of clipped double Q-learning may degrade in some stochastic environments. In this article, in order to reduce the underestimation bias, we propose an action candidate-based clipped double estimator (AC-CDE) for double Q-learning. Specifically, we first select a set of elite action candidates with high action values from one set of estimators. Then, among these candidates, we choose the action with the highest value under the other set of estimators. Finally, we clip the chosen action's value in the first set of estimators with the maximum value in the second set of estimators, and use the clipped value to approximate the maximum expected action value. Theoretically, the underestimation bias in our clipped double Q-learning decays monotonically as the number of action candidates decreases. Moreover, the number of action candidates controls the tradeoff between the overestimation and underestimation biases. In addition, we extend our clipped double Q-learning to continuous action tasks by approximating the elite continuous action candidates. We empirically verify that our algorithm estimates the maximum expected action value more accurately in several toy environments and yields good performance on several benchmark problems. Code is available at https://github.com/Jiang-HB/ac_CDQ.
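As a rough illustration of the three steps described in the abstract, the sketch below computes the AC-CDE estimate for a single state in the tabular setting. It is a minimal reading of the abstract, not code from the paper or its repository; the function name ac_cde_value, the arguments q_a and q_b (the two estimators' action values at one state), and num_candidates are illustrative choices.

    import numpy as np

    def ac_cde_value(q_a, q_b, num_candidates):
        """Action-candidate-based clipped double estimate of the maximum
        expected action value for one state, given two independent
        estimator tables q_a and q_b (1-D arrays over actions)."""
        # Step 1: elite candidates = the num_candidates highest-valued actions under q_a.
        candidates = np.argsort(q_a)[-num_candidates:]
        # Step 2: among the candidates, pick the action ranked highest by q_b.
        best = candidates[np.argmax(q_b[candidates])]
        # Step 3: clip that action's value under q_a by the maximum value under q_b.
        return min(q_a[best], np.max(q_b))

In a tabular double Q-learning update, such an estimate would presumably replace the usual clipped target for the next state, e.g. target = reward + gamma * ac_cde_value(Q_A[next_state], Q_B[next_state], K), with the roles of the two tables swapped at random as in standard double Q-learning; setting K to the full action set recovers the clipped double estimator, while K = 1 behaves like clipping the single maximum estimator, which is the overestimation/underestimation tradeoff the abstract refers to.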
Pages: 5269-5279
Number of pages: 11