High-accuracy model-based reinforcement learning, a survey

Cited by: 0
Authors
Aske Plaat
Walter Kosters
Mike Preuss
Affiliation
[1] Leiden University, Computer Science
Source
Artificial Intelligence Review, 2023, 56(9)
Keywords
Model-based reinforcement learning; Latent models; Deep learning; Machine learning; Planning
DOI
Not available
Abstract
Deep reinforcement learning has shown remarkable success in the past few years. Highly complex sequential decision-making problems from game playing and robotics have been solved with deep model-free methods. Unfortunately, the sample complexity of model-free methods is often high. Model-based reinforcement learning, in contrast, can reduce the number of environment samples by learning an explicit internal model of the environment dynamics. However, achieving good model accuracy in high-dimensional problems is challenging. In recent years, a diverse landscape of model-based methods has been introduced to improve model accuracy, using approaches such as probabilistic inference, model-predictive control, latent models, and end-to-end learning and planning. Some of these methods achieve high accuracy at low sample complexity in typical benchmark applications. In this paper, we survey these methods; we explain how they work and what their strengths and weaknesses are. We conclude with a research agenda for future work to make the methods more robust and applicable to a wider range of applications.
Pages: 9541-9573
Number of pages: 32
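
The following is a minimal, hypothetical Python sketch of the idea summarized in the abstract: learn an explicit dynamics model from a small number of environment samples, then plan against that learned model with model-predictive control (random shooting). The toy environment, the linear model class, and all hyperparameters are illustrative assumptions and not taken from the survey.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment: state s, action a in [-1, 1]; the goal is to drive s to 0.
def env_step(s, a):
    s_next = s + 0.5 * a + rng.normal(scale=0.01)
    reward = -abs(s_next)                      # closer to 0 is better
    return s_next, reward

# 1) Collect a modest number of random environment samples.
transitions = []
s = rng.uniform(-2.0, 2.0)
for _ in range(200):
    a = rng.uniform(-1.0, 1.0)
    s_next, _ = env_step(s, a)
    transitions.append((s, a, s_next))
    s = s_next

# 2) Fit an explicit dynamics model  s' ~ w0*s + w1*a + w2  by least squares.
X = np.array([[s, a, 1.0] for s, a, _ in transitions])
y = np.array([s_next for _, _, s_next in transitions])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def model_step(s, a):
    return w[0] * s + w[1] * a + w[2]

# 3) Model-predictive control by random shooting: sample candidate action
#    sequences, evaluate them in the learned model, execute the first action
#    of the best sequence, then replan at the next step.
def plan(s, horizon=5, candidates=64):
    best_first_action, best_return = 0.0, -np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        sim_s, ret = s, 0.0
        for a in seq:
            sim_s = model_step(sim_s, a)
            ret += -abs(sim_s)                 # same reward definition as env_step
        if ret > best_return:
            best_first_action, best_return = seq[0], ret
    return best_first_action

s = 2.0
for t in range(20):
    a = plan(s)
    s, r = env_step(s, a)
    print(f"step {t:2d}  state {s:+.3f}  reward {r:+.3f}")

In this sketch the controller drives |s| toward zero while planning entirely in the learned model; real environment interaction is limited to the initial 200 random samples plus one step per control decision, which illustrates the sample-efficiency argument made in the abstract.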