Autonomous Landing of the Quadrotor on the Mobile Platform via Meta Reinforcement Learning

Cited: 0
Authors
Cao, Qianqian [1 ,2 ]
Liu, Ziyi [1 ,2 ]
Yu, Hai [1 ,2 ]
Liang, Xiao [1 ,2 ]
Fang, Yongchun [1 ,2 ]
Affiliations
[1] Nankai Univ, Coll Artificial Intelligence, Inst Robot & Automat Informat Syst, Tianjin 300350, Peoples R China
[2] Nankai Univ, Tianjin Key Lab Intelligent Robot, Tianjin 300350, Peoples R China
Funding
National Natural Science Foundation of China
关键词
Quadrotor; meta reinforcement learning; autonomous landing; trajectory planning and control;
DOI
Not available
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Landing a quadrotor on a mobile platform moving along various unknown trajectories presents special challenges, including the requirements of fast trajectory planning/replanning, accurate control, and adaptability to different target trajectories, especially when the platform is non-cooperative. However, previous works either assume that the platform moves along a predefined trajectory or decouple planning from control, which may cause a delay in tracking. In this work, we integrate planning and control into a unified framework and present an efficient off-policy Meta-Reinforcement Learning (Meta-RL) algorithm that enables a quadrotor (agent) to autonomously land on a mobile platform with various unknown trajectories. In our approach, we disentangle task-specific policy parameters from shared low-level parameters via a separate adapter network, and we learn a probabilistic encoder to extract common structure across different tasks. Specifically, during meta-training, we sample different trajectories from the task distribution, and the probabilistic encoder accumulates the necessary statistics from past experience into latent variables that enable the policy to perform the task. At meta-testing time, when the quadrotor faces an unseen trajectory, the latent variables can be sampled according to past interactions between the quadrotor and the mobile platform and held constant during an episode, enabling rapid trajectory-level adaptation. We assume that similar tasks share a common low-dimensional structure in the policy network's representation and that task-specific information is captured in the head of the policy. Accordingly, we formulate training of the separate adapter network as a supervised learning problem: the adapter network learns the weights of the policy's output layer for each meta-training task from the agent's environment interactions. When adapting to a new task during meta-testing, we fix the shared model layers and predict the head weights for the new task using the trained adapter network. This ensures that the pretrained policy can efficiently adapt to different tasks, which boosts out-of-distribution performance. Our method directly controls the pitch, roll, and yaw angles and the thrust of the quadrotor, yielding a fast response to trajectory changes. Simulation results show the superiority of our method over other RL algorithms on meta-testing tasks, both in success rate and adaptation efficiency. Real-world experimental results, compared with traditional planning and control algorithms, demonstrate the satisfactory performance of our autonomous landing method, especially its robustness in adapting to unknown dynamics.

Note to Practitioners: To the best of our knowledge, there is no well-established solution to the motion uncertainty that arises when a quadrotor lands on a mobile platform following an unknown trajectory. This paper introduces meta-reinforcement learning to this problem, incorporating a latent-variable encoder to extract common features from the training tasks and an adapter network to enhance the policy network's ability to adapt to new tasks, thereby improving the agent's landing performance. The proposed method demonstrates promising results in both simulation and real-world experiments.
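The abstract outlines two mechanisms: a probabilistic context encoder that compresses past interactions into a latent task variable held fixed during an episode, and an adapter network that predicts the policy head's weights from that latent variable while the trunk stays shared. The sketch below is a minimal, hypothetical PyTorch rendering of these two ideas, assuming a PEARL-style product-of-Gaussians posterior; every class name, dimension, and the tanh output squashing are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two components described in the abstract:
# a PEARL-style probabilistic context encoder and an adapter network that
# predicts the policy head's weights. All names/dimensions are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Normal

class ContextEncoder(nn.Module):
    """Compresses past (s, a, r, s') transitions into a Gaussian latent z."""
    def __init__(self, transition_dim, latent_dim, hidden=128):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(transition_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),   # per-transition mean, log-var
        )

    def forward(self, context):                  # context: (N, transition_dim)
        mu, log_var = self.net(context).split(self.latent_dim, dim=-1)
        # Product of independent Gaussian factors, one per transition (PEARL):
        precision = 1.0 / log_var.exp().clamp(min=1e-7)
        post_var = 1.0 / precision.sum(dim=0)
        post_mu = post_var * (precision * mu).sum(dim=0)
        return Normal(post_mu, post_var.sqrt())   # posterior over latent z

class AdapterPolicy(nn.Module):
    """Shared trunk plus a task-specific head whose weights come from an adapter."""
    def __init__(self, obs_dim, latent_dim, act_dim=4, hidden=128):
        super().__init__()
        self.act_dim, self.hidden = act_dim, hidden
        self.trunk = nn.Sequential(               # shared low-level layers
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Adapter: maps latent task variable z to the output layer's weights/bias.
        self.adapter = nn.Linear(latent_dim, hidden * act_dim + act_dim)

    def forward(self, obs, z):
        feats = self.trunk(torch.cat([obs, z], dim=-1))
        params = self.adapter(z)
        w = params[: self.hidden * self.act_dim].view(self.act_dim, self.hidden)
        b = params[self.hidden * self.act_dim:]
        # Four outputs: pitch, roll, yaw angle, and thrust commands.
        return torch.tanh(feats @ w.t() + b)

# Meta-test time: infer z once from early interactions, hold it fixed all episode.
enc = ContextEncoder(transition_dim=20, latent_dim=8)
pi = AdapterPolicy(obs_dim=12, latent_dim=8)
context = torch.randn(32, 20)                     # dummy batch of past transitions
z = enc(context).sample()                         # trajectory-level adaptation
action = pi(torch.randn(12), z)                   # pitch/roll/yaw/thrust in [-1, 1]
```

Holding the sampled z constant for a whole episode mirrors the trajectory-level adaptation described above; at meta-test time only the head weights produced by the adapter change between tasks, while the trunk parameters remain frozen.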
Pages: 2269-2280
Page count: 12
Related Papers
50 records
  • [11] A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Moving Platform
    Rodriguez-Ramos, Alejandro
    Sampedro, Carlos
    Bavle, Hriday
    de la Puente, Paloma
    Campoy, Pascual
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2019, 93 : 351 - 366
  • [12] Autonomous Planetary Landing via Deep Reinforcement Learning and Transfer Learning
    Ciabatti, Giulia
    Daftry, Shreyansh
    Capobianco, Roberto
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2031 - 2038
  • [13] Optimal controller design for autonomous quadrotor landing on moving platform
    Cengiz, Said Kemal
    Ucun, Levent
    SIMULATION MODELLING PRACTICE AND THEORY, 2022, 119
  • [15] Vision-based Autonomous Quadrotor Landing on a Moving Platform
    Falanga, Davide
    Zanchettin, Alessio
    Simovic, Alessandro
    Delmerico, Jeffrey
    Scaramuzza, Davide
    2017 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY AND RESCUE ROBOTICS (SSRR), 2017, : 200 - 207
  • [16] Inclined Quadrotor Landing using Deep Reinforcement Learning
    Kooi, Jacob E.
    Babuska, Robert
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 2361 - 2368
  • [17] Precision landing of autonomous parafoil system via deep reinforcement learning
    Wei, Zhenyu
    Shao, Zhijiang
    2024 IEEE AEROSPACE CONFERENCE, 2024,
  • [18] Autonomous landing of a quadrotor on a moving platform using motion capture system
    Qassab, Ayman
    Khan, Muhammad Umer
    Irfanoglu, Bulent
    DISCOVER APPLIED SCIENCES, 2024, 6 (06)
  • [19] Autonomous landing solution of low-cost quadrotor on a moving platform
    Qi, Yuhua
    Jiang, Jiaqi
    Wu, Jin
    Wang, Jianan
    Wang, Chunyan
    Shan, Jiayuan
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2019, 119 : 64 - 76
  • [20] Dynamic Landing of an Autonomous Quadrotor on a Moving Platform in Turbulent Wind Conditions
    Paris, Aleix
    Lopez, Brett T.
    How, Jonathan P.
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 9577 - 9583