DWA-RL: Dynamically Feasible Deep Reinforcement Learning Policy for Robot Navigation among Mobile Obstacles

Cited by: 35
Authors
Patel, Utsav [1 ]
Kumar, Nithish K. Sanjeev [1 ]
Sathyamoorthy, Adarsh Jagan [2 ]
Manocha, Dinesh [1 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[2] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
DOI
10.1109/ICRA48506.2021.9561462
CLC classification
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
We present a novel Deep Reinforcement Learning (DRL) based policy to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles. Our approach combines the benefits of the Dynamic Window Approach (DWA), in terms of satisfying the robot's dynamics constraints, with state-of-the-art DRL-based navigation methods that handle moving obstacles and pedestrians well. Our formulation achieves these goals by embedding the environmental obstacles' motions in a novel low-dimensional observation space. It also uses a novel reward function to positively reinforce velocities that move the robot away from an obstacle's heading direction, leading to a significantly lower number of collisions. We evaluate our method in realistic 3-D simulated environments and on a real differential drive robot in challenging dense indoor scenarios with several walking pedestrians. We compare our method with state-of-the-art collision avoidance methods and observe significant improvements in terms of success rate (up to 33% increase), number of dynamics constraint violations (up to 61% decrease), and smoothness. We also conduct ablation studies to highlight the advantages of our observation space formulation and reward structure.
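The abstract describes two mechanisms: DWA's dynamic window, which guarantees feasibility by sampling only velocities reachable under acceleration limits within one control step, and a reward term that favors velocities pointing away from an obstacle's heading. The sketch below illustrates both ideas under stated assumptions; it is not the authors' implementation, and the function names, parameters, and the simple cosine-based reward shaping are illustrative.

```python
import math


def dynamic_window(v, w, v_limits, w_limits, accel_limits, dt):
    """Admissible (linear, angular) velocity ranges one step ahead.

    DWA restricts the search to velocities reachable from the current
    command (v, w) within time dt under acceleration limits, so every
    sampled velocity is dynamically feasible by construction.
    """
    v_min, v_max = v_limits
    w_min, w_max = w_limits
    dv, dw = accel_limits  # max linear / angular acceleration
    return (
        (max(v_min, v - dv * dt), min(v_max, v + dv * dt)),
        (max(w_min, w - dw * dt), min(w_max, w + dw * dt)),
    )


def heading_avoidance_reward(robot_vel_dir, obstacle_heading, weight=1.0):
    """Toy reward term (angles in radians): positive when the robot's
    velocity direction points away from the obstacle's heading."""
    # Wrap the angle difference into (-pi, pi] before scoring it.
    diff = math.atan2(math.sin(robot_vel_dir - obstacle_heading),
                      math.cos(robot_vel_dir - obstacle_heading))
    # -cos(diff) is +1 when moving opposite the obstacle's heading,
    # -1 when moving along it.
    return -weight * math.cos(diff)
```

For example, a robot currently at v = 0.5 m/s with a 2 m/s^2 linear acceleration limit and dt = 0.1 s can only reach linear velocities in [0.3, 0.7] m/s on the next step, regardless of its global velocity limits.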
Pages: 6057-6063
Page count: 7