Aerial filming with synchronized drones using reinforcement learning

Cited: 0
Authors
Kenneth C. W. Goh
Raymond B. C. Ng
Yoke-Keong Wong
Nicholas J. H. Ho
Matthew C. H. Chua
Affiliations
[1] National University of Singapore, Institute of Systems Science
Keywords
Aerial filming; Autonomous drones; Swarm formation control; Deep reinforcement learning
DOI: Not available
Abstract
The use of multiple drones is necessary in aerial filming applications to ensure redundancy; however, it also raises the risk of collisions, especially as the number of drones increases. This motivates us to explore autonomous flight formation control methods that enable multiple drones to track a specific target simultaneously. In this paper, we design a model-free deep reinforcement learning algorithm, based primarily on the Deep Recurrent Q-Network (DRQN) concept, for this purpose. The proposed algorithm is extended into single-agent and multi-agent variants that enable multi-drone target tracking while maintaining formation and preventing collisions. The rewards in these approaches are two-dimensional in nature and depend on the communication system. Using the Microsoft AirSim simulator, a virtual environment containing four drones was developed for the experiments. A comparison of the methods in simulation shows that the recurrent, single-agent model is the most effective, performing 33% better than its recurrent, multi-agent counterpart. The poor performance of the non-recurrent, single-agent baseline further suggests that the recurrent elements of the network are essential for desirable multi-drone flight.
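As a concrete illustration of the Deep Recurrent Q-Network (DRQN) concept that the abstract builds on, the sketch below shows a minimal recurrent Q-network with epsilon-greedy action selection in PyTorch. This is an illustrative sketch, not the authors' implementation: the observation and action dimensions, the hidden size, and the select_action helper are assumptions; the paper's actual observations, two-dimensional reward design, and multi-agent extensions are not reproduced here.

# Minimal DRQN sketch (illustrative only; dimensions and names are assumptions).
import random
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Q-network with an LSTM so the agent can act on observation histories."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim); hidden_state carries memory across calls
        x = self.encoder(obs_seq)
        x, hidden_state = self.lstm(x, hidden_state)
        return self.q_head(x), hidden_state

def select_action(net, obs, hidden_state, epsilon: float, n_actions: int):
    """Epsilon-greedy action for one timestep, preserving the recurrent state."""
    with torch.no_grad():
        q_values, hidden_state = net(obs.view(1, 1, -1), hidden_state)
    if random.random() < epsilon:
        return random.randrange(n_actions), hidden_state
    return int(q_values[0, -1].argmax()), hidden_state

if __name__ == "__main__":
    # Assumed toy setup: e.g. relative target/neighbour positions as observations
    # and a small discrete set of velocity commands as actions.
    obs_dim, n_actions = 12, 7
    net = DRQN(obs_dim, n_actions)
    hidden_state = None
    for _ in range(5):  # a few fake timesteps
        obs = torch.randn(obs_dim)
        action, hidden_state = select_action(net, obs, hidden_state, 0.1, n_actions)
        print("chosen action:", action)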
Pages: 18125-18150
Page count: 25