Model-based reinforcement learning with model error and its application

Cited by: 0
Authors:
Tajima, Yoshiyuki [1 ]
Onisawa, Takehisa [1 ]
Affiliations:
[1] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Keywords: reinforcement learning; model-based reinforcement learning; agent; robot
DOI: not available
Chinese Library Classification: TP [automation technology, computer technology]
Discipline code: 0812
Abstract:
This paper proposes a Reinforcement Learning (RL) algorithm called Model Error based Forward Planning Reinforcement Learning (ME-FPRL). In this algorithm, the agent regulates the amount of learning according to the current model error. The study applies ME-FPRL to target pursuit with a robot camera; the results show that ME-FPRL is more efficient than both conventional RL and standard model-based RL.
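The abstract gives only the high-level idea: scale how much the agent learns from its internal model by how wrong that model currently is. The sketch below is a hypothetical Dyna-style illustration of that idea, assuming a tabular Q-learning agent, a deterministic one-step model, and a simple running disagreement estimate; the class name, update rules, and gating formula are illustrative assumptions, not the paper's actual ME-FPRL algorithm.

```python
import random

class ModelErrorGatedAgent:
    """Tabular Q-learning agent whose model-based planning budget
    shrinks as the learned environment model becomes less reliable."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 max_plan_steps=10):
        self.Q = [[0.0] * n_actions for _ in range(n_states)]
        self.model = {}            # (state, action) -> (reward, next_state)
        self.model_error = 1.0     # running disagreement estimate in [0, 1]
        self.alpha, self.gamma = alpha, gamma
        self.max_plan_steps = max_plan_steps

    def q_update(self, s, a, r, s2):
        # Standard one-step Q-learning backup.
        best_next = max(self.Q[s2])
        self.Q[s][a] += self.alpha * (r + self.gamma * best_next - self.Q[s][a])

    def observe(self, s, a, r, s2):
        # 1. Direct RL update from the real transition.
        self.q_update(s, a, r, s2)
        # 2. Track how often the model's prediction disagrees with reality.
        err = 0.0 if self.model.get((s, a)) == (r, s2) else 1.0
        self.model_error += 0.1 * (err - self.model_error)
        self.model[(s, a)] = (r, s2)
        # 3. Gate planning by model error: a trustworthy model (low error)
        #    earns more simulated updates; an unreliable one earns fewer.
        n_plan = int(round(self.max_plan_steps * (1.0 - self.model_error)))
        for _ in range(n_plan):
            (ps, pa), (pr, ps2) = random.choice(list(self.model.items()))
            self.q_update(ps, pa, pr, ps2)
```

After a few consistent observations the disagreement estimate decays, so the agent gradually spends more of its budget on simulated (planning) updates, which is one plausible reading of "controlling the amount of learning by using the model error."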
Pages: 1333-1336 (4 pages)
Related papers (50 total):
  • [41] Farahmand, Amir Massoud; Shademan, Azad; Jagersand, Martin; Szepesvari, Csaba. Model-based and Model-free Reinforcement Learning for Visual Servoing. IEEE International Conference on Robotics and Automation (ICRA), 2009: 4135-4142.
  • [42] Ji, Tianying; Luo, Yu; Sun, Fuchun; Jing, Mingxuan; He, Fengxiang; Huang, Wenbing. When to Update Your Model: Constrained Model-based Reinforcement Learning. Advances in Neural Information Processing Systems 35 (NeurIPS), 2022.
  • [43] Huang, Wenzhen; Zhang, Junge; Huang, Kaiqi. Bootstrap Estimated Uncertainty of the Environment Model for Model-Based Reinforcement Learning. Thirty-Third AAAI Conference on Artificial Intelligence, 2019: 3870-3877.
  • [44] Kim, Hyeoneun; Lim, Woosang; Lee, Kanghoon; Noh, Yung-Kyun; Kim, Kee-Eung. Reward Shaping for Model-Based Bayesian Reinforcement Learning. Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015: 3548-3555.
  • [45] Lin, Zichuan; Thomas, Garrett; Yang, Guangwen; Ma, Tengyu. Model-based Adversarial Meta-Reinforcement Learning. Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
  • [46] Zhang, Baohe; Rajan, Raghu; Pineda, Luis; Lambert, Nathan; Biedenkapp, Andre; Chua, Kurtland; Hutter, Frank; Calandra, Roberto. On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning. 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021, 130.
  • [47] Kamalapurkar, Rushikesh; Walters, Patrick; Dixon, Warren E. Model-based reinforcement learning for approximate optimal regulation. Automatica, 2016, 64: 94-104.
  • [48] Grimm, Christopher; Barreto, Andre; Singh, Satinder; Silver, David. The Value Equivalence Principle for Model-Based Reinforcement Learning. Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.
  • [49] Yildiz, Cagatay; Heinonen, Markus; Lahdesmaki, Harri. Continuous-Time Model-Based Reinforcement Learning. International Conference on Machine Learning (ICML), 2021, 139.
  • [50] Lison, Pierre. Model-based Bayesian Reinforcement Learning for Dialogue Management. 14th Annual Conference of the International Speech Communication Association (INTERSPEECH), 2013: 475-479.