Deep Reinforcement Learning for Vision-Based Navigation of UAVs in Avoiding Stationary and Mobile Obstacles

Cited by: 15
Authors
Kalidas, Amudhini P. [1 ]
Joshua, Christy Jackson [1 ]
Md, Abdul Quadir [1 ]
Basheer, Shakila [2 ]
Mohan, Senthilkumar [3 ]
Sakri, Sapiah [2 ]
Affiliations
[1] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai 600127, India
[2] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, POB 84428, Riyadh 11671, Saudi Arabia
[3] Vellore Inst Technol, Sch Informat Technol & Engn, Vellore 632014, India
Keywords
autonomous navigation; collision avoidance; deep reinforcement learning; drones
DOI
10.3390/drones7040245
CLC number
TP7 [Remote Sensing Technology]
Subject classification codes
081102; 0816; 081602; 083002; 1404
Abstract
Unmanned Aerial Vehicles (UAVs), also known as drones, have advanced greatly in recent years. Drones are used in many domains, including transportation, photography, climate monitoring, and disaster relief, owing to their efficiency and safety across a wide range of operations. Drone design, however, is not yet flawless: detecting and avoiding collisions remains a major challenge. In this context, this paper describes a methodology for developing a drone system that operates autonomously without human intervention. The study applies reinforcement learning algorithms to train a drone to avoid obstacles autonomously, in both discrete and continuous action spaces, based solely on image data. Its novelty lies in a comprehensive assessment of the advantages, limitations, and future research directions of obstacle detection and avoidance for drones using different reinforcement learning techniques. Three reinforcement learning strategies are compared, namely Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC), for avoiding both stationary and moving obstacles. The experiments were carried out in a virtual environment provided by AirSim, with the various training and testing scenarios built in Unreal Engine 4 to understand and analyze the behavior of the RL algorithms. According to the training results, SAC outperformed the other two algorithms. PPO was the least successful, indicating that on-policy algorithms are ineffective in extensive 3D environments with dynamic actors. The two off-policy algorithms, DQN and SAC, produced encouraging outcomes; however, owing to its constrained discrete action space, DQN may not be as advantageous as SAC in narrow pathways and twists.
In summary, for autonomous drones, off-policy algorithms such as DQN and SAC perform more effectively than on-policy algorithms such as PPO. These findings could have practical implications for the development of safer and more efficient drones in the future.
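The abstract's point about DQN's constrained discrete action space can be made concrete with a minimal sketch. This is not code from the paper: the action set, the yaw-rate units, and both helper functions are assumptions for illustration only. A DQN-style policy must snap a steering command to the nearest member of a fixed discrete set, while a SAC-style policy samples a tanh-squashed Gaussian anywhere in (-1, 1), so only the former incurs quantization error on the fine corrections a narrow pathway demands.

```python
import math
import random

# Hypothetical discretization a DQN agent might use (an assumption, not the
# paper's actual action set): five normalized yaw-rate commands.
DQN_YAW_RATES = [-1.0, -0.5, 0.0, 0.5, 1.0]

def dqn_action(desired: float) -> float:
    """Snap the desired yaw rate to the nearest available discrete action."""
    return min(DQN_YAW_RATES, key=lambda a: abs(a - desired))

def sac_action(mean: float, std: float = 1e-3) -> float:
    """Sample a SAC-style tanh-squashed Gaussian action in (-1, 1)."""
    return math.tanh(mean + std * random.gauss(0.0, 1.0))

# A narrow corridor demanding a fine correction of 0.23 (normalized units):
desired = 0.23
print(abs(dqn_action(desired) - desired))              # quantization error, ~0.23
print(abs(sac_action(math.atanh(desired)) - desired))  # near zero
```

With a coarse action set, the DQN-style policy cannot express the 0.23 command at all (it snaps to 0.0), whereas the continuous policy lands arbitrarily close to it; this is one plausible reading of why SAC fared better in narrow pathways and twists.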
Pages: 23