Adaptive Environment Modeling Based Reinforcement Learning for Collision Avoidance in Complex Scenes

Cited: 8
Authors
Wang, Shuaijun [1 ,2 ]
Gao, Rui [2 ]
Han, Ruihua [2 ,3 ]
Chen, Shengduo [2 ]
Li, Chengyang [2 ]
Hao, Qi [2 ,4 ]
Affiliations
[1] Harbin Inst Technol, Harbin, Heilongjiang, Peoples R China
[2] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Guangdong, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong 999077, Peoples R China
[4] Southern Univ Sci & Technol, Research Inst Trustworthy Autonomous Syst, Shenzhen, Guangdong, Peoples R China
Keywords
DOI
10.1109/IROS47612.2022.9982107
Chinese Library Classification: TP [automation technology; computer technology];
Discipline Code: 0812;
Abstract
The major challenges of collision avoidance for robot navigation in crowded scenes lie in accurate environment modeling, fast perception, and trustworthy motion planning policies. This paper presents a novel adaptive-environment-model-based collision avoidance reinforcement learning (AEMCARL) framework that enables an unmanned robot to achieve collision-free motion in challenging navigation scenarios. The novelty of this work is threefold: (1) a hierarchical gated-recurrent-unit (GRU) network for environment modeling; (2) an adaptive perception mechanism with an attention module; (3) an adaptive reward function that allows the reinforcement learning (RL) framework to jointly train the environment model, perception function, and motion planning policy. The proposed method is tested with the Gym-Gazebo simulator and a group of robots (Husky and Turtlebot) in various crowded scenes. Both simulation and experimental results demonstrate the superior performance of the proposed method over baseline methods.
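The abstract's combination of a GRU-based per-agent environment model with attention pooling can be sketched as below. This is an illustrative NumPy reconstruction under stated assumptions, not the authors' implementation: the dimensions, weight names, single-layer additive attention, and per-agent hidden-state layout are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the previous
    hidden state to keep versus overwrite with new input."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1.0 - z) * h + z * h_tilde

def attention_pool(H, w):
    """Softmax attention over per-agent hidden states, yielding one
    fixed-size crowd feature regardless of how many agents are seen."""
    scores = H @ w                            # one score per agent
    a = np.exp(scores - scores.max())
    a = a / a.sum()                           # attention weights, sum to 1
    return a @ H                              # weighted sum of hidden states

rng = np.random.default_rng(0)
obs_dim, hid = 4, 8                           # assumed toy dimensions
# Input-to-hidden (W*) and hidden-to-hidden (U*) matrices, interleaved.
params = [rng.standard_normal((obs_dim, hid)) * 0.1 if i % 2 == 0
          else rng.standard_normal((hid, hid)) * 0.1 for i in range(6)]
w = rng.standard_normal(hid)                  # attention scoring vector

n_agents = 5
H = np.zeros((n_agents, hid))                 # one hidden state per agent
obs = rng.standard_normal((n_agents, obs_dim))
H = np.stack([gru_cell(obs[i], H[i], params) for i in range(n_agents)])
crowd_feature = attention_pool(H, w)          # shape (hid,)
print(crowd_feature.shape)
```

The fixed-size `crowd_feature` is what would then feed a planning policy: attention pooling makes the representation invariant to the number of nearby agents, which is the usual motivation for attention modules in crowd navigation.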
Pages: 9011-9018
Page count: 8
Related Papers (50 total)
  • [31] A COLREGs-Compliant Collision Avoidance Decision Approach Based on Deep Reinforcement Learning
    Wang, Weiqiang
    Huang, Liwen
    Liu, Kezhong
    Wu, Xiaolie
    Wang, Jingyao
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2022, 10 (07)
  • [32] Soft collision avoidance based car following algorithm for autonomous driving with reinforcement learning
    Zheng, Yuqi
    Yan, Ruidong
    Jia, Bin
    Jiang, Rui
    Zheng, Shiteng
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 654
  • [33] Enhanced method for reinforcement learning based dynamic obstacle avoidance by assessment of collision risk
    Hart, Fabian
    Okhrin, Ostap
    NEUROCOMPUTING, 2024, 568
  • [34] TRANSFER REINFORCEMENT LEARNING: FEATURE TRANSFERABILITY IN SHIP COLLISION AVOIDANCE
    Wang, Xinrui
    Jin, Yan
    PROCEEDINGS OF ASME 2023 INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, IDETC-CIE2023, VOL 3B, 2023,
  • [35] A Novel Reinforcement Learning Collision Avoidance Algorithm for USVs Based on Maneuvering Characteristics and COLREGs
    Fan, Yunsheng
    Sun, Zhe
    Wang, Guofeng
    SENSORS, 2022, 22 (06)
  • [36] A novel intelligent collision avoidance algorithm based on deep reinforcement learning approach for USV
    Fan, Yunsheng
    Sun, Zhe
    Wang, Guofeng
    OCEAN ENGINEERING, 2023, 287
  • [37] A Deep Reinforcement Learning Method for Mobile Robot Collision Avoidance based on Double DQN
    Xue, Xidi
    Li, Zhan
    Zhang, Dongsheng
    Yan, Yingxin
    2019 IEEE 28TH INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), 2019, : 2131 - 2136
  • [38] Training Is Execution: A Reinforcement Learning-Based Collision Avoidance Algorithm for Volatile Scenarios
    Ban, Jian
    Li, Gongyan
    IEEE ACCESS, 2024, 12 : 116956 - 116967
  • [39] Multi-Robot Collision Avoidance with Map-based Deep Reinforcement Learning
    Yao, Shunyi
    Chen, Guangda
    Pan, Lifan
    Ma, Jun
    Ji, Jianmin
    Chen, Xiaoping
    2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2020, : 532 - 539
  • [40] Research on Collision Avoidance Algorithm of Unmanned Surface Vehicle Based on Deep Reinforcement Learning
    Xia, Jiawei
    Zhu, Xufang
    Liu, Zhikun
    Luo, Yasong
    Wu, Zhaodong
    Wu, Qiuhan
    IEEE SENSORS JOURNAL, 2023, 23 (11) : 11262 - 11273