A survey on deep learning and deep reinforcement learning in robotics with a tutorial on deep reinforcement learning

Cited by: 35
Authors
Morales, Eduardo F. [1 ,2 ]
Murrieta-Cid, Rafael [1 ]
Becerra, Israel [1 ,3 ]
Esquivel-Basaldua, Marco A. [1 ]
Affiliations
[1] Ctr Invest Matemat CIMAT, Guanajuato, Mexico
[2] Inst Nacl Astrofis Opt & Elect INAOE, Tonantzintla, Mexico
[3] Consejo Nacl Ciencia & Tecnol CONACyT, Mexico City, DF, Mexico
Keywords
Deep learning; Deep reinforcement learning; Mobile robotics; Mobile robot navigation; Motion planning; Mobile manipulation; END-TO-END; REACHABILITY;
DOI
10.1007/s11370-021-00398-z
CLC classification
TP24 [Robotics]
Subject classification codes
080202 ; 1405 ;
Abstract
This article surveys deep learning (DL) and deep reinforcement learning (DRL) work applied to robotics. Both tools have proven successful in delivering data-driven solutions for robotic tasks, and they provide a natural way to build an end-to-end pipeline from the robot's sensing to its actuation, passing through the generation of a policy that performs the given task. These frameworks have been shown to cope with real-world complications such as noisy sensing, imprecise actuation, and variability in the scenarios where the robot is deployed, among others. In that vein, and given the growing interest in DL and DRL, the present work begins with a brief tutorial on deep reinforcement learning, whose goal is to convey the main concepts and approaches followed in the field. The article then describes the main, recent, and most promising DL and DRL approaches in robotics, with sufficient technical detail to grasp the core of the works and to motivate interested readers to initiate their own research in the area. To provide a comparative analysis, we then present several taxonomies under which the references can be classified, according to high-level features, the task the work addresses, the type of system, and the learning techniques used. We conclude by presenting promising research directions in both DL and DRL.
Pages: 773–805
Page count: 33
Related papers
50 total
  • [1] A survey on deep learning and deep reinforcement learning in robotics with a tutorial on deep reinforcement learning
    Eduardo F. Morales
    Rafael Murrieta-Cid
    Israel Becerra
    Marco A. Esquivel-Basaldua
    [J]. Intelligent Service Robotics, 2021, 14 : 773 - 805
  • [2] A Survey on Deep Reinforcement Learning
    [J]. 2018, Science Press (41):
  • [3] Deep reinforcement learning: a survey
    Wang, Hao-nan
    Liu, Ning
    Zhang, Yi-yun
    Feng, Da-wei
    Huang, Feng
    Li, Dong-sheng
    Zhang, Yi-ming
    [J]. FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2020, 21 (12) : 1726 - 1744
  • [4] Deep reinforcement learning: a survey
    Hao-nan Wang
    Ning Liu
    Yi-yun Zhang
    Da-wei Feng
    Feng Huang
    Dong-sheng Li
    Yi-ming Zhang
    [J]. Frontiers of Information Technology & Electronic Engineering, 2020, 21 : 1726 - 1744
  • [5] Deep Reinforcement Learning: A Survey
    Wang, Xu
    Wang, Sen
    Liang, Xingxing
    Zhao, Dawei
    Huang, Jincai
    Xu, Xin
    Dai, Bin
    Miao, Qiguang
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (04) : 5064 - 5078
  • [6] Transfer Learning in Deep Reinforcement Learning: A Survey
    Zhu, Zhuangdi
    Lin, Kaixiang
    Jain, Anil K.
    Zhou, Jiayu
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (11) : 13344 - 13362
  • [7] The Advance of Reinforcement Learning and Deep Reinforcement Learning
    Lyu, Le
    Shen, Yang
    Zhang, Sicheng
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, BIG DATA AND ALGORITHMS (EEBDA), 2022, : 644 - 648
  • [8] Exploration in deep reinforcement learning: A survey
    Ladosz, Pawel
    Weng, Lilian
    Kim, Minwoo
    Oh, Hyondong
    [J]. INFORMATION FUSION, 2022, 85 : 1 - 22
  • [9] Deep Reinforcement Learning Verification: A Survey
    Landers, Matthew
    Doryab, Afsaneh
    [J]. ACM COMPUTING SURVEYS, 2023, 55 (14S)
  • [10] Deep Reinforcement Learning: A Brief Survey
    Arulkumaran, Kai
    Deisenroth, Marc Peter
    Brundage, Miles
    Bharath, Anil Anthony
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2017, 34 (06) : 26 - 38