Mobile Robot Navigation Using Deep Reinforcement Learning

Cited by: 16
Authors
Lee, Min-Fan Ricky [1 ,2 ]
Yusuf, Sharfiden Hassen [1 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Grad Inst Automat & Control, Taipei 106335, Taiwan
[2] Natl Taiwan Univ Sci & Technol, Ctr Cyber Phys Syst Innovat, Taipei 106335, Taiwan
Keywords
autonomous navigation; collision avoidance; reinforcement learning; mobile robots; localization; SLAM
DOI
10.3390/pr10122748
Chinese Library Classification
TQ [Chemical Industry]
Discipline Code
0817
Abstract
Learning to navigate autonomously in an unknown indoor environment without colliding with static and dynamic obstacles is important for mobile robots. Conventional mobile robot navigation systems lack the ability to learn autonomously. Unlike conventional approaches, this paper proposes an end-to-end approach that uses deep reinforcement learning for autonomous mobile robot navigation in an unknown environment. Two deep Q-learning agents, a deep Q-network (DQN) agent and a double deep Q-network (DDQN) agent, are proposed to enable the mobile robot to learn collision avoidance and navigation autonomously in an unknown environment. The target object is first detected using a deep neural network model, and the robot then navigates to it using the DQN or DDQN algorithm. Simulation results show that the mobile robot can autonomously navigate, recognize, and reach the target object location in an unknown environment without colliding with static and dynamic obstacles. Similar results are obtained in real-world experiments, but only with static obstacles. The DDQN agent outperforms the DQN agent in reaching the target object location in the test simulation by 5.06%.
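The practical difference between the two agents lies in how the bootstrap target for the Q-update is computed: DQN lets one network both select and evaluate the next action, which tends to overestimate Q-values, while DDQN selects the action with the online network and evaluates it with the target network. A minimal NumPy sketch of the two targets (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def dqn_target(rewards, next_q_target, gamma, dones):
    """DQN target: the target network both selects and evaluates
    the next action (max over its own Q-values)."""
    return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)

def ddqn_target(rewards, next_q_online, next_q_target, gamma, dones):
    """DDQN target: the online network selects the greedy action,
    the target network evaluates it, reducing overestimation bias."""
    best_actions = next_q_online.argmax(axis=1)
    evaluated = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```

Here `next_q_online` and `next_q_target` are the (batch, n_actions) Q-value arrays produced by the online and target networks for the next state; when the online and target networks disagree on the best action, the DDQN target is strictly lower than the DQN target, which is the source of the reduced overestimation.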
Pages: 22