Multigoal Visual Navigation With Collision Avoidance via Deep Reinforcement Learning

Cited by: 0
Authors
Xiao, Wendong [1 ]
Yuan, Liang [1 ,2 ,3 ]
He, Li [1 ]
Ran, Teng [1 ]
Zhang, Jianbo [1 ]
Cui, Jianping [1 ]
Institutions
[1] Xinjiang Univ, Sch Mech Engn, Urumqi 830046, Peoples R China
[2] Beijing Univ Chem Technol, Beijing Adv Innovat Ctr Soft Matter Sci & Engn, Beijing 100029, Peoples R China
[3] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Navigation; Visualization; Task analysis; Trajectory; Collision avoidance; Reinforcement learning; Training; deep reinforcement learning (DRL); multigoal navigation; visual sensor;
DOI
10.1109/TIM.2022.3158384
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
Learning to map the images acquired by a moving agent equipped with a camera sensor to motion commands for multigoal navigation is challenging. Most existing approaches still struggle with collision avoidance, convergence speed, and generalization. In this article, a novel actor-critic architecture is presented to learn the optimal navigation policy. We introduce a single-step reward observation and a collision penalty to reshape the reinforcement learning (RL) reward function. Collision perception is obtained from the reshaped reward function and treated as measurement information derived from the visual observation to avoid obstacles. In addition, expert trajectories are used to generate subgoals, and a subgoal reward shaping is proposed to accelerate policy learning with the expert knowledge of those subgoals. To generate human-aware navigation policies, an observation-action consistency (OAC) model is introduced to ensure that the agent reaches the subgoals in turn and moves toward the target. The whole training process follows a self-supervised RL approach accompanied by an expert supervision signal. This method balances exploration and exploitation, helping the proposed model generalize to unseen goals. Training experiments on AI2-THOR show better performance and faster convergence than existing approaches. For generalization to unseen goals, the proposed method achieves a state-of-the-art success rate, with at least a 30% reduction in average episode collisions.
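The abstract describes a reward function reshaped with a single-step reward observation, a collision penalty, and subgoal shaping from expert trajectories. The paper's exact formulation and coefficients are not given in this record, so the following is only an illustrative sketch of that reward structure; all function names and numeric values (step cost, penalty and bonus magnitudes) are assumptions for illustration, not the authors' values.

```python
def reshaped_reward(dist_prev, dist_curr, collided, at_subgoal, at_goal,
                    step_cost=-0.01, collision_penalty=-1.0,
                    subgoal_bonus=0.1, goal_reward=10.0):
    """Illustrative reshaped reward: single-step progress toward the goal,
    a collision penalty, a subgoal bonus from expert-trajectory subgoals,
    and a terminal success reward. All coefficients are assumed values."""
    r = step_cost                  # small per-step time penalty
    r += dist_prev - dist_curr     # single-step reward: progress toward goal
    if collided:
        r += collision_penalty     # collision perception as a penalty signal
    if at_subgoal:
        r += subgoal_bonus         # subgoal reward shaping from expert knowledge
    if at_goal:
        r += goal_reward           # terminal reward on reaching the target
    return r
```

In such schemes the dense progress term and subgoal bonuses speed up convergence, while the collision penalty lets the policy learn obstacle avoidance from the visual observation alone, without an explicit range sensor.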
Pages: 9