Dyna-Validator: A Model-based Reinforcement Learning Method with Validated Simulated Experiences

Cited by: 0
Authors
Zhang, Hengsheng [1 ,2 ]
Li, Jingchen [1 ]
He, Ziming [1 ]
Zhu, Jinhui [1 ]
Shi, Haobin [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] China Elect Technol Grp Corp, Res Inst 20, Xian 710018, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Model-based reinforcement learning (MBRL); Dyna; Simulated annealing; GO;
DOI
10.15837/ijccc.2023.5.5073
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Dyna is a planning paradigm that naturally weaves learning and planning together through environment models. Dyna-style reinforcement learning improves sample efficiency by using simulated experience generated by the environment model to update the value function. However, existing Dyna-style planning methods are usually tabular and thus suitable only for tasks with low-dimensional, small-scale state spaces. In addition, the quality of the simulated experience these methods generate cannot be guaranteed, which significantly limits their application to tasks such as high-dimensional continuous robot control and autonomous driving. To this end, we propose a model-based approach that controls planning through a validator. The validator filters out low-quality experiences, retaining high-quality ones for policy learning, and decides when to stop planning. To handle the exploration-exploitation dilemma in reinforcement learning, we design an action selection strategy that combines an ε-greedy policy with a simulated annealing (SA) cooling schedule. The strong performance of the proposed method is demonstrated on a set of classic Atari games. Experimental results show that learning a dynamics model can improve sample efficiency in some games, and that this benefit is maximized by choosing an appropriate number of planning steps. In the planning phase, our method narrows the gap to the current state-of-the-art model-based reinforcement learning method (MuZero). Achieving a good trade-off between model accuracy and planning step size requires controlling planning carefully. Applying the method to a physical robot system helps reduce the impact of an imprecise depth prediction model on the task: without human supervision, it is easier to collect training data and learn complex skills (such as grasping and carrying items), and the method scales more effectively to previously unseen tasks.
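To make the abstract's control flow concrete, below is a minimal tabular Python sketch of validator-gated Dyna planning combined with an ε-greedy rule whose ε follows an SA-style cooling schedule. All names and values here (WorldModel, Validator, epsilon_sa, the exponential schedule, the error threshold, and the hyperparameters) are illustrative assumptions for exposition, not the paper's implementation, which uses learned models and is evaluated on Atari.

```python
import math
import random
from collections import defaultdict

class WorldModel:
    """Hypothetical tabular dynamics model (the paper learns a neural model)."""
    def __init__(self):
        self.transitions = {}                      # (s, a) -> (s_next, reward)

    def remember(self, s, a, s_next, r):
        self.transitions[(s, a)] = (s_next, r)

    def seen(self):
        return list(self.transitions)

    def sample(self, s, a):
        return self.transitions[(s, a)]

class Validator:
    """Hypothetical validator: accept a simulated transition only when its
    estimated model error is below a threshold."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def error(self, s, a, r, s_next):
        return 0.0                                 # placeholder error estimate

    def accept(self, s, a, r, s_next):
        return self.error(s, a, r, s_next) < self.threshold

def epsilon_sa(step, eps_min=0.05, eps_max=1.0, tau=5000.0):
    # SA-style exponential cooling: epsilon anneals from eps_max toward
    # eps_min, trading exploration for exploitation as training proceeds.
    return eps_min + (eps_max - eps_min) * math.exp(-step / tau)

def select_action(q, s, actions, step):
    """Annealed epsilon-greedy rule used when acting in the real environment."""
    if random.random() < epsilon_sa(step):
        return random.choice(actions)              # explore
    return max(actions, key=lambda a: q[(s, a)])   # exploit

def planning_phase(q, model, validator, actions, step,
                   n_plan=10, alpha=0.1, gamma=0.99):
    """Dyna planning: replay simulated transitions, letting the validator
    filter low-quality ones before they update the value function."""
    for _ in range(n_plan):
        s, a = random.choice(model.seen())
        s_next, r = model.sample(s, a)             # simulated experience
        if not validator.accept(s, a, r, s_next):
            continue                               # discard low-quality rollout
        best = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])  # Q-learning backup

# Usage: seed the model with one real transition, then plan.
q = defaultdict(float)
model, validator = WorldModel(), Validator()
model.remember("s0", 0, "s1", 1.0)
planning_phase(q, model, validator, actions=[0, 1], step=100)
```

The sketch only skips rejected transitions; the abstract also assigns the validator the job of stopping planning early, which a fuller version would implement by feeding repeated rejections into a stopping criterion.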
Pages: 18