Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment

Cited by: 19
Authors:
Bin Issa, Razin [1 ]
Das, Modhumonty [1 ]
Rahman, Md. Saferi [1 ]
Barua, Monika [1 ]
Rhaman, Md. Khalilur [1 ]
Ripon, Kazi Shah Nawaz [2 ]
Alam, Md. Golam Rabiul [1 ]
Affiliations:
[1] BRAC Univ, Sch Data & Sci, Dept Comp Sci & Engn, 66 Mohakhali, Dhaka 1212, Bangladesh
[2] Ostfold Univ Coll, Fac Comp Sci, N-1783 Halden, Norway
Keywords:
autonomous vehicle; reinforcement learning; Double Deep Q-Learning; Faster R-CNN; object classifier; Markov decision process
DOI: 10.3390/s21041468
CLC number: O65 [Analytical Chemistry]
Discipline codes: 070302; 081704
Abstract:
Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. In contrast, this research focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for autonomous navigation of land vehicles in a dynamic, unknown environment. Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles for untroubled maneuvering of the autonomous vehicle, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world, where it exhibits overall efficiency and effectiveness in maneuvering autonomous land vehicles.
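The core idea of Double Deep Q-Learning mentioned in the abstract is that the online network selects the next action while a separate target network evaluates it, which reduces the value-overestimation bias of standard DQN. A minimal sketch of that target computation is shown below; the function name, toy Q-values, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double DQN bootstrap target for one transition.

    Action selection uses the online network's Q-values;
    action evaluation uses the target network's Q-values.
    """
    best_action = int(np.argmax(q_online_next))   # select with online net
    bootstrap = q_target_next[best_action]        # evaluate with target net
    return reward + gamma * bootstrap * (1.0 - float(done))

# Toy example: the online net prefers action 1; the target net scores it 0.5.
q_online_next = np.array([0.2, 0.9, 0.1])
q_target_next = np.array([0.4, 0.5, 0.3])
y = double_dqn_target(q_online_next, q_target_next, reward=1.0, gamma=0.99, done=False)
print(y)  # 1.0 + 0.99 * 0.5 = 1.495
```

In the paper's setting, the reward and state would come from the driving environment, with Faster R-CNN detections forming part of the observed state; here they are replaced by fixed toy values.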
Pages: 1-24 (24 pages)
Related papers (50 total):
  • [21] The autonomous navigation and obstacle avoidance for USVs with ANOA deep reinforcement learning method
    Wu, Xing
    Chen, Haolei
    Chen, Changgu
    Zhong, Mingyu
    Xie, Shaorong
    Guo, Yike
    Fujita, Hamido
    KNOWLEDGE-BASED SYSTEMS, 2020, 196 (196)
  • [22] Autonomous Obstacle Avoidance with Improved Deep Reinforcement Learning Based on Dynamic Huber Loss
    Xu, Xiaoming
    Li, Xian
    Chen, Na
    Zhao, Dongjie
    Chen, Chunmei
    APPLIED SCIENCES-BASEL, 2025, 15 (05):
  • [23] Obstacle Avoidance for AUV by Q-Learning based Guidance Vector Field
    Wu, Keqiao
    Yao, Peng
    PROCEEDINGS OF 2020 3RD INTERNATIONAL CONFERENCE ON UNMANNED SYSTEMS (ICUS), 2020, : 702 - 707
  • [24] Autonomous obstacle avoidance of UAV based on deep reinforcement learning
    Yang, Songyue
    Yu, Guizhen
    Meng, Zhijun
    Wang, Zhangyu
    Li, Han
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42 (04) : 3323 - 3335
  • [26] Q-learning-based unmanned aerial vehicle path planning with dynamic obstacle avoidance
    Sonny, Amala
    Yeduri, Sreenivasa Reddy
    Cenkeramaddi, Linga Reddy
    APPLIED SOFT COMPUTING, 2023, 147
  • [27] Dynamic Obstacle Avoidance of Mobile Robots Using Real-Time Q-learning
    Kim, HoWon
    Lee, WonChang
    2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022,
  • [28] A virtual simulation environment using deep learning for autonomous vehicles obstacle avoidance
    Meftah, Leila Haj
    Braham, Rafik
    2020 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SECURITY INFORMATICS (ISI), 2020, : 205 - 211
  • [29] The Q-learning obstacle avoidance algorithm based on EKF-SLAM for NAO autonomous walking under unknown environments
    Wen, Shuhuan
    Chen, Xiao
    Ma, Chunli
    Lam, H. K.
    Hua, Shaoyang
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 72 : 29 - 36
  • [30] Design of Obstacle Avoidance for Autonomous Vehicle Using Deep Q-Network and CARLA Simulator
    Terapaptommakol, Wasinee
    Phaoharuhansa, Danai
    Koowattanasuchat, Pramote
    Rajruangrabin, Jartuwat
    WORLD ELECTRIC VEHICLE JOURNAL, 2022, 13 (12):