Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment

Cited by: 19
Authors
Bin Issa, Razin [1 ]
Das, Modhumonty [1 ]
Rahman, Md. Saferi [1 ]
Barua, Monika [1 ]
Rhaman, Md. Khalilur [1 ]
Ripon, Kazi Shah Nawaz [2 ]
Alam, Md. Golam Rabiul [1 ]
Affiliations
[1] BRAC Univ, Sch Data & Sci, Dept Comp Sci & Engn, 66 Mohakhali, Dhaka 1212, Bangladesh
[2] Ostfold Univ Coll, Fac Comp Sci, N-1783 Halden, Norway
Keywords
autonomous vehicle; reinforcement learning; Double Deep Q-Learning; Faster R-CNN; object classifier; Markov decision process
DOI
10.3390/s21041468
CLC Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, in contrast, focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles so that the autonomous vehicle can maneuver untroubled, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world and exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
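The core idea behind the Double Deep Q-Learning component named in the abstract is that the online network selects the next action while a separate target network evaluates it, which reduces the value overestimation of standard DQN. The sketch below is a minimal, self-contained illustration of that target computation only; it is not taken from the paper, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def double_dqn_targets(rewards, q_online_next, q_target_next, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards:       (B,) immediate rewards
    q_online_next: (B, A) online-network Q-values for the next states
    q_target_next: (B, A) target-network Q-values for the next states
    dones:         (B,) 1.0 where the episode terminated, else 0.0
    """
    # Action selection by the online network (the "double" part) ...
    best_actions = np.argmax(q_online_next, axis=1)
    # ... but value estimation by the target network.
    next_values = q_target_next[np.arange(len(best_actions)), best_actions]
    # Bootstrapped target; terminal states contribute no future value.
    return rewards + gamma * next_values * (1.0 - dones)
```

In a full agent these targets would serve as the labels for a gradient step on the online network, with the target network's weights refreshed periodically.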
Pages: 1-24 (24 pages)
Related Papers (showing 10 of 50)
  • [1] Ou, Jiajun; Guo, Xiao; Zhu, Ming; Lou, Wenjie. Autonomous quadrotor obstacle avoidance based on dueling double deep recurrent Q-learning with monocular vision. NEUROCOMPUTING, 2021, 441: 300-310.
  • [2] Zhang, Liang; Yan, Jing; Yang, Xian; Luo, Xiaoyuan. Q-learning Based Obstacle Avoidance Control of Autonomous Underwater Vehicle with Binocular Vision. 2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022: 4293-4298.
  • [3] Gonzalez-Miranda, Oscar; Miranda, Luis Antonio Lopez; Ibarra-Zannatha, Juan Manuel. Q-Learning for autonomous vehicle navigation. 2023 XXV ROBOTICS MEXICAN CONGRESS, COMROB, 2023: 138-142.
  • [4] Ngai, DCK; Yung, NHC. Double action Q-learning for obstacle avoidance in a dynamically changing environment. 2005 IEEE Intelligent Vehicles Symposium Proceedings, 2005: 211-216.
  • [5] Ribeiro, Tiago; Goncalves, Fernando; Garcia, Ines; Lopes, Gil; Fernando Ribeiro, A. Q-Learning for Autonomous Mobile Robot Obstacle Avoidance. 2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019), 2019: 243-249.
  • [6] Strauss, Clement; Sahin, Ferat. Autonomous Navigation based on a Q-learning algorithm for a Robot in a Real Environment. 2008 IEEE INTERNATIONAL CONFERENCE ON SYSTEM OF SYSTEMS ENGINEERING (SOSE), 2008: 361-365.
  • [7] Yuan, Rupeng; Zhang, Fuhai; Wang, Yu; Fu, Yili; Wang, Shuguo. A Q-learning approach based on human reasoning for navigation in a dynamic environment. ROBOTICA, 2019, 37 (03): 445-468.
  • [8] Zhang, Yi; Wei, Xin; Zhou, Xiangyu. Dynamic obstacle avoidance based on multi-sensor fusion and Q-learning algorithm. PROCEEDINGS OF 2019 IEEE 3RD INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2019), 2019: 1569-1573.
  • [9] Wang, Chunlei; Yang, Xiao; Li, He. Improved Q-Learning Applied to Dynamic Obstacle Avoidance and Path Planning. IEEE ACCESS, 2022, 10: 92879-92888.
  • [10] Wen, Shuhuan; Hu, Xueheng; Li, Zhen; Lam, Hak Keung; Sun, Fuchun; Fang, Bin. NAO robot obstacle avoidance based on fuzzy Q-learning. INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2020, 47 (06): 801-811.