Reinforcement imitation learning for reliable and efficient autonomous navigation in complex environments

Cited by: 0
Authors
Kumar D. [1 ]
Affiliations
[1] Computer Science and Engineering, United College of Engineering and Research, Naini, Prayagraj, Uttar Pradesh
Keywords
Autonomous navigation; Deep neural networks; Dynamic environments; Imitation learning; Q-learning; Reinforcement learning;
DOI
10.1007/s00521-024-09678-y
Abstract
Reinforcement learning (RL) and imitation learning (IL) are two machine learning techniques that have been shown to improve navigation performance. Both methods seek a policy, i.e., a decision function, either through trial-and-error reinforcement or by imitating demonstrations. In this paper, we propose a novel algorithm, Reinforcement Imitation Learning (RIL), that combines RL and IL to achieve more reliable and efficient navigation in dynamic environments. RIL is a hybrid approach that uses RL for policy optimization and IL, learned from expert demonstrations, as guidance. We compare the convergence of RIL with conventional RL and IL to support our algorithm’s performance in a dynamic environment with moving obstacles. The test results indicate that RIL achieves better collision avoidance and navigation efficiency than traditional methods. The proposed RIL algorithm has broad application prospects in areas such as autonomous driving, unmanned aerial vehicles, and robotics. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
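The abstract does not spell out RIL's update rule. Since the keywords mention Q-learning and the abstract describes guidance from expert demonstrations, one plausible sketch blends a tabular Q-learning step with a large-margin imitation term (in the style of learning-from-demonstration methods such as DQfD). Every name, parameter, and the margin formulation below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def ril_update(Q, s, a, r, s_next, expert_a=None,
               alpha=0.1, gamma=0.95, beta=0.5, margin=0.8):
    """Hypothetical hybrid update: a Q-learning TD step on the observed
    transition, plus an imitation term that raises the expert's action
    until it beats every alternative by at least `margin`.
    beta weights the imitation guidance relative to the RL step."""
    # Standard Q-learning temporal-difference update
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    # Large-margin imitation term, applied only where a demonstration exists:
    # competitors get a margin bonus, so the expert action is pushed up
    # until it dominates them; once it does, the correction vanishes.
    if expert_a is not None:
        competing = np.max(Q[s] + margin * (np.arange(Q.shape[1]) != expert_a))
        Q[s, expert_a] += alpha * beta * (competing - Q[s, expert_a])
    return Q
```

With repeated updates on a transition where the expert's action differs from the one taken, the imitation term gradually makes the demonstrated action the greedy choice in that state, which is the qualitative behavior the abstract attributes to RIL's guidance.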
Pages: 11945–11961
Page count: 16
Related papers (50 in total)
  • [1] Sample Efficient Reinforcement Learning for Navigation in Complex Environments
    Moridian, Barzin
    Page, Brian R.
    Mahmoudian, Nina
    2019 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2019, : 15 - 21
  • [2] Cooperative Deep Reinforcement Learning Policies for Autonomous Navigation in Complex Environments
    Tran, Van Manh
    Kim, Gon-Woo
    IEEE ACCESS, 2024, 12 : 101053 - 101065
  • [3] Deep Imitation Learning for Autonomous Navigation in Dynamic Pedestrian Environments
    Qin, Lei
    Huang, Zefan
    Zhang, Chen
    Guo, Hongliang
    Ang, Marcelo, Jr.
    Rus, Daniela
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4108 - 4115
  • [4] Autonomous Navigation in Complex Environments using Memory-Aided Deep Reinforcement Learning
    Kastner, Linh
    Shen, Zhengcheng
    Marx, Cornelius
    Lambrecht, Jens
    2021 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2021, : 170 - 175
  • [5] Applied Imitation Learning for Autonomous Navigation in Complex Natural Terrain
    Silver, David
    Bagnell, J. Andrew
    Stentz, Anthony
    FIELD AND SERVICE ROBOTICS, 2010, 62 : 249 - 259
  • [6] Deep Reinforcement Learning for Autonomous Drone Navigation in Cluttered Environments
    Solaimalai, Gautam
    Prakash, Kode Jaya
    Kumar, Sampath S.
    Bhagyalakshmi, A.
    Siddharthan, P.
    Kumar, Senthil K. R.
    2024 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATION AND APPLIED INFORMATICS, ACCAI 2024, 2024,
  • [7] Navigation of autonomous vehicles in unknown environments using reinforcement learning
    Martinez-Marin, Tomas
    Rodriguez, Rafael
    2007 IEEE INTELLIGENT VEHICLES SYMPOSIUM, VOLS 1-3, 2007, : 964 - +
  • [8] Autonomous Navigation of UAVs in Large-Scale Complex Environments: A Deep Reinforcement Learning Approach
    Wang, Chao
    Wang, Jian
    Shen, Yuan
    Zhang, Xudong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (03) : 2124 - 2136
  • [9] Intervention Force-based Imitation Learning for Autonomous Navigation in Dynamic Environments
    Yokoyama, Tomoya
    Seiya, Shunya
    Takeuchi, Eijiro
    Takeda, Kazuya
    2020 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2020, : 1679 - 1688