Deep Reinforcement Learning for Vision-Based Navigation of UAVs in Avoiding Stationary and Mobile Obstacles

Citations: 15
Authors
Kalidas, Amudhini P. [1 ]
Joshua, Christy Jackson [1 ]
Md, Abdul Quadir [1 ]
Basheer, Shakila [2 ]
Mohan, Senthilkumar [3 ]
Sakri, Sapiah [2 ]
Affiliations
[1] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai 600127, India
[2] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, POB 84428, Riyadh 11671, Saudi Arabia
[3] Vellore Inst Technol, Sch Informat Technol & Engn, Vellore 632014, India
Keywords
autonomous navigation; collision avoidance; deep reinforcement learning; drones
DOI
10.3390/drones7040245
CLC Classification
TP7 [Remote Sensing Technology]
Discipline Codes
081102; 0816; 081602; 083002; 1404
Abstract
Unmanned Aerial Vehicles (UAVs), also known as drones, have advanced greatly in recent years. Drones are used in many applications, including transportation, photography, climate monitoring, and disaster relief, owing to their efficiency and safety across a wide range of operations. Drone design, however, is not yet flawless: detecting and avoiding collisions remains a major challenge. In this context, this paper describes a methodology for developing a drone system that operates autonomously, without human intervention. The study applies reinforcement learning algorithms to train a drone to avoid obstacles autonomously in discrete and continuous action spaces based solely on image data. The novelty of this study lies in its comprehensive assessment of the advantages, limitations, and future research directions of obstacle detection and avoidance for drones using different reinforcement learning techniques. Three reinforcement learning strategies, namely Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC), are compared on their ability to avoid both stationary and moving obstacles. The experiments were carried out in a virtual environment provided by AirSim, with the training and testing scenarios built in Unreal Engine 4 to analyze the behavior of the RL algorithms for drones. According to the training results, SAC outperformed the other two algorithms. PPO was the least successful, indicating that on-policy algorithms are ineffective in extensive 3D environments with dynamic actors. The two off-policy algorithms, DQN and SAC, produced encouraging outcomes; however, due to its constrained discrete action space, DQN may be less advantageous than SAC in narrow pathways and tight turns.
In summary, for autonomous drones, off-policy algorithms such as DQN and SAC perform more effectively than on-policy algorithms such as PPO. These findings could have practical implications for the development of safer and more efficient drones in the future.
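The abstract's central contrast, DQN's constrained discrete action space versus SAC's continuous one, can be illustrated with a minimal sketch. The nine-way command discretization, function names, and the tanh-squashed Gaussian policy here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Illustrative fixed action set a DQN-style agent might use for a steering command
# (e.g., yaw rate normalized to [-1, 1]); DQN can only ever emit one of these bins.
DISCRETE_ACTIONS = np.linspace(-1.0, 1.0, 9)

def dqn_style_action(q_values, epsilon=0.1, rng=None):
    """Epsilon-greedy selection from a finite action set (DQN-style)."""
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        idx = int(rng.integers(len(DISCRETE_ACTIONS)))  # explore: random bin
    else:
        idx = int(np.argmax(q_values))                  # exploit: best Q-value
    return DISCRETE_ACTIONS[idx]

def sac_style_action(mean, log_std, rng=None):
    """Sample a continuous command from a tanh-squashed Gaussian (SAC-style)."""
    rng = rng or np.random.default_rng(0)
    u = mean + np.exp(log_std) * rng.standard_normal()
    return np.tanh(u)  # any value in (-1, 1), not just 9 fixed bins

q = np.array([0.1, 0.3, 0.9, 0.2, 0.0, 0.5, 0.4, 0.6, 0.2])
print(dqn_style_action(q, epsilon=0.0))  # greedy pick -> -0.5 (bin at argmax)
print(sac_style_action(mean=0.2, log_std=-1.0))
```

The coarse bins explain the abstract's caveat about narrow pathways and tight turns: a discrete agent must approximate a smooth steering curve with a small set of fixed commands, while a continuous policy can output exactly the command needed.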
Pages: 23