DWA-RL: Dynamically Feasible Deep Reinforcement Learning Policy for Robot Navigation among Mobile Obstacles

Cited by: 35
Authors
Patel, Utsav [1 ]
Kumar, Nithish K. Sanjeev [1 ]
Sathyamoorthy, Adarsh Jagan [2 ]
Manocha, Dinesh [1 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[2] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
DOI: 10.1109/ICRA48506.2021.9561462
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
We present a novel Deep Reinforcement Learning (DRL) based policy to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles. Our approach combines the benefits of the Dynamic Window Approach (DWA), in terms of satisfying the robot's dynamics constraints, with state-of-the-art DRL-based navigation methods that handle moving obstacles and pedestrians well. Our formulation achieves these goals by embedding the environmental obstacles' motions in a novel low-dimensional observation space. It also uses a novel reward function to positively reinforce velocities that move the robot away from the obstacle's heading direction, leading to a significantly lower number of collisions. We evaluate our method in realistic 3-D simulated environments and on a real differential drive robot in challenging dense indoor scenarios with several walking pedestrians. We compare our method with state-of-the-art collision avoidance methods and observe significant improvements in terms of success rate (up to 33% increase), number of dynamics constraint violations (up to 61% decrease), and smoothness. We also conduct ablation studies to highlight the advantages of our observation space formulation and reward structure.
Pages: 6057-6063
Page count: 7
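The abstract couples DWA-style dynamically feasible velocity sampling with a reward term that favors velocities steering the robot away from an obstacle's heading direction. A minimal Python sketch of these two ideas follows; the function names, parameter values (acceleration limits, time step, sample counts, reward weight), and the exact form of the reward term are assumptions made for illustration, not the paper's implementation.

import numpy as np

# Illustrative sketch only (not the paper's implementation): the acceleration
# limits, time step, sample counts, and reward weight below are assumed values.

def dynamic_window_velocities(v, w, a_max=0.5, alpha_max=1.0, dt=0.1, n=5):
    # Sample (linear, angular) velocity pairs reachable within one control
    # step given the acceleration limits -- the DWA dynamic-feasibility constraint.
    vs = np.linspace(v - a_max * dt, v + a_max * dt, n)
    ws = np.linspace(w - alpha_max * dt, w + alpha_max * dt, n)
    return [(float(vi), float(wi)) for vi in vs for wi in ws]

def heading_avoidance_reward(robot_pos, robot_vel, obs_pos, obs_heading, weight=1.0):
    # Reward velocities that move the robot out of the obstacle's heading
    # direction: positive when the robot's predicted position lies away from
    # where the obstacle is travelling, negative when it lies in its path.
    rel = (robot_pos + robot_vel) - obs_pos
    rel = rel / (np.linalg.norm(rel) + 1e-6)
    heading = np.array([np.cos(obs_heading), np.sin(obs_heading)])
    return -weight * float(np.dot(heading, rel))

# Example: a DRL policy would choose one action from the dynamically feasible
# set, so every executed velocity satisfies the robot's dynamics constraints.
actions = dynamic_window_velocities(v=0.3, w=0.0)
r = heading_avoidance_reward(np.array([0.0, 0.0]), np.array([0.2, 0.1]),
                             np.array([1.0, 0.0]), obs_heading=np.pi)

Because the action set is restricted to the dynamic window before the learned policy selects among its members, dynamic feasibility is guaranteed by construction rather than penalized after the fact.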