A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Platform

Cited by: 2
Authors
Jiang, Zhiling [1 ]
Song, Guanghua [1 ]
Affiliation
[1] Zhejiang Univ, Sch Aeronaut & Astronaut, Hangzhou, Peoples R China
Keywords
Deep Reinforcement Learning; Continuous Action Space; UAV Autonomous Landing; Gazebo & ROS;
DOI
10.1109/ICRSS57469.2022.00031
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly important tools for a wide variety of tasks, and Reinforcement Learning (RL) is a popular research topic. In this paper we combine the two fields, applying reinforcement learning to UAV control and promoting its use in real-world applications. We design a reinforcement learning framework named ROS-RL, built on the physical simulation platform Gazebo, which addresses the problem of UAV motion in a continuous action space. Algorithms connect to this framework through ROS and train an agent to control the drone to complete tasks. We realize the autonomous landing task of a UAV with three different reinforcement learning algorithms in this framework. The experimental results show the effectiveness of the algorithms in controlling a UAV flying in a simulation environment close to the real world.
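The abstract describes ROS-RL as a Gazebo/ROS-based framework in which RL algorithms command the UAV in a continuous action space. As a rough illustration of how such a framework can be exposed to an RL algorithm, below is a minimal, hypothetical Gym-style environment sketch; the topic names (/cmd_vel, /odom), reward shaping, and termination thresholds are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a Gym-style wrapper around a ROS/Gazebo quadrotor,
# illustrating the kind of interface a framework like ROS-RL could expose.
# Topic names, reward shaping, and thresholds are assumptions.
import numpy as np
import rospy
import gym
from gym import spaces
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry


class UAVLandingEnv(gym.Env):
    """Continuous-action landing environment: observe pose, command velocity."""

    def __init__(self):
        rospy.init_node("uav_landing_env", anonymous=True)
        # Continuous action: (vx, vy, vz) velocity commands.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        # Observation: UAV position relative to the landing platform.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)   # assumed topic
        rospy.Subscriber("/odom", Odometry, self._odom_cb)                # assumed topic
        self.position = np.zeros(3, dtype=np.float32)

    def _odom_cb(self, msg):
        p = msg.pose.pose.position
        self.position = np.array([p.x, p.y, p.z], dtype=np.float32)

    def step(self, action):
        cmd = Twist()
        cmd.linear.x, cmd.linear.y, cmd.linear.z = map(float, action)
        self.cmd_pub.publish(cmd)
        rospy.sleep(0.05)                       # let the simulation advance
        dist = np.linalg.norm(self.position)    # platform assumed at the origin
        reward = -dist                           # simple shaping: closer is better
        done = bool(self.position[2] < 0.1 and dist < 0.3)  # touched down near the pad
        return self.position.copy(), reward, done, {}

    def reset(self):
        # A full implementation would call a Gazebo reset service here
        # (e.g. /gazebo/reset_world) and re-randomize the UAV start pose.
        return self.position.copy()
```

With an interface of this shape, any off-the-shelf continuous-control RL algorithm could interact with the simulated UAV as with any other Gym task, which is the kind of decoupling between algorithm and simulator that the ROS-RL framework aims for.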
Pages: 104-109
Number of pages: 6
Related Papers
50 records in total
  • [1] A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Moving Platform
    Rodriguez-Ramos, Alejandro
    Sampedro, Carlos
    Bavle, Hriday
    de la Puente, Paloma
    Campoy, Pascual
    [J]. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2019, 93 (1-2) : 351 - 366
  • [2] UAV Autonomous Tracking and Landing Based on Deep Reinforcement Learning Strategy
    Xie, Jingyi
    Peng, Xiaodong
    Wang, Haijiao
    Niu, Wenlong
    Zheng, Xiao
    [J]. SENSORS, 2020, 20 (19) : 1 - 17
  • [3] Deep Reinforcement Learning with Corrective Feedback for Autonomous UAV Landing on a Mobile Platform
    Wu, Lizhen
    Wang, Chang
    Zhang, Pengpeng
    Wei, Changyun
    [J]. DRONES, 2022, 6 (09)
  • [4] PID with Deep Reinforcement Learning and Heuristic Rules for Autonomous UAV Landing
    Yuan, Man
    Wang, Chang
    Zhang, Pengpeng
    Wei, Changyun
    [J]. PROCEEDINGS OF 2022 INTERNATIONAL CONFERENCE ON AUTONOMOUS UNMANNED SYSTEMS, ICAUS 2022, 2023, 1010 : 1876 - 1884
  • [5] Autonomous Landing on a Moving Platform Using Vision-Based Deep Reinforcement Learning
    Ladosz, Pawel
    Mammadov, Meraj
    Shin, Heejung
    Shin, Woojae
    Oh, Hyondong
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (05) : 4575 - 4582
  • [6] Toward End-to-End Control for UAV Autonomous Landing via Deep Reinforcement Learning
    Polvara, Riccardo
    Patacchiola, Massimiliano
    Sharma, Sanjay
    Wan, Jian
    Manning, Andrew
    Sutton, Robert
    Cangelosi, Angelo
    [J]. 2018 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS (ICUAS), 2018, : 115 - 123
  • [7] A Deep Reinforcement Learning Technique for Vision-Based Autonomous Multirotor Landing on a Moving Platform
    Rodriguez-Ramos, Alejandro
    Sampedro, Carlos
    Bavle, Hriday
    Gil Moreno, Ignacio
    Campoy, Pascual
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1010 - 1017
  • [8] Autonomous Planetary Landing via Deep Reinforcement Learning and Transfer Learning
    Ciabatti, Giulia
    Daftry, Shreyansh
    Capobianco, Roberto
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2031 - 2038
  • [9] Autonomous Emergency Landing for Multicopters using Deep Reinforcement Learning
    Bartolomei, Luca
    Kompis, Yves
    Teixeira, Lucas
    Chli, Margarita
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 3392 - 3399