Action Candidate Driven Clipped Double Q-Learning for Discrete and Continuous Action Tasks

Cited by: 5
Authors
Jiang, Haobo [1 ,2 ,3 ]
Li, Guangyu [1 ,2 ,3 ]
Xie, Jin [1 ,2 ,3 ]
Yang, Jian [1 ,2 ,3 ]
Affiliations
[1] Nanjing Univ Sci & Technol, PCA Lab, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens In, Minist Educ, Nanjing 210094, Peoples R China
[3] Nanjing Univ Sci & Technol, Jiangsu Key Lab Image & Video Understanding Socia, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Q-learning; Task analysis; Benchmark testing; Approximation algorithms; Toy manufacturing industry; Markov processes; Learning systems; Clipped double Q-learning; overestimation bias; reinforcement learning; underestimation bias; REINFORCEMENT; BIAS;
DOI
10.1109/TNNLS.2022.3203024
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Double Q-learning is a popular reinforcement learning algorithm for Markov decision process (MDP) problems. Clipped double Q-learning, an effective variant of double Q-learning, employs the clipped double estimator to approximate the maximum expected action value. Because the clipped double estimator is biased toward underestimation, clipped double Q-learning can perform poorly in some stochastic environments. In this article, to reduce this underestimation bias, we propose an action candidate-based clipped double estimator (AC-CDE) for double Q-learning. Specifically, we first select a set of elite action candidates with high action values from one set of estimators. Then, among these candidates, we choose the action with the highest value under the other set of estimators. Finally, we clip the chosen action's value in the first set of estimators with the maximum value in the second set of estimators, and the clipped value is used to approximate the maximum expected action value. Theoretically, the underestimation bias in our clipped double Q-learning decays monotonically as the number of action candidates decreases, so the number of action candidates controls the tradeoff between the overestimation and underestimation biases. In addition, we extend our clipped double Q-learning to continuous action tasks by approximating the elite continuous action candidates. We empirically verify that our algorithm estimates the maximum expected action value more accurately on several toy environments and yields strong performance on several benchmark problems. Code is available at https://github.com/Jiang-HB/ac_CDQ.
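To make the estimator concrete, the following is a minimal sketch (in Python/NumPy) of how the AC-CDE bootstrapped target described in the abstract could be computed for a discrete action space. It assumes tabular Q-tables q_a and q_b; the function and parameter names (ac_cde_target, k) are illustrative and not taken from the authors' released code.

    import numpy as np

    def ac_cde_target(q_a, q_b, next_state, reward, gamma=0.99, k=3):
        """AC-CDE target for tabular double Q-learning (illustrative sketch)."""
        # 1) Elite candidates: the k actions with the highest values under Q^A.
        candidates = np.argsort(q_a[next_state])[-k:]
        # 2) Among the candidates, pick the action that Q^B values most.
        a_star = candidates[np.argmax(q_b[next_state, candidates])]
        # 3) Clip Q^A's value of that action by the maximum value under Q^B;
        #    the clipped value approximates max_a E[Q(s', a)].
        clipped_value = min(q_a[next_state, a_star], np.max(q_b[next_state]))
        return reward + gamma * clipped_value

With k = 1 the candidate is simply the Q^A-greedy action, so the target clips max_a Q^A by max_a Q^B; with k equal to the number of actions, the candidate choice is driven entirely by Q^B, as in the double estimator. This is the sense in which the candidate count trades off overestimation against underestimation, as stated in the abstract.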
Pages: 5269-5279
Page count: 11