Omnidirectional Autonomous Aggressive Perching of Unmanned Aerial Vehicle using Reinforcement Learning Trajectory Generation and Control

Cited by: 1
Authors
Huang, Yu-Ting [1 ]
Pi, Chen-Huan [1 ]
Cheng, Stone [1 ]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Dept Mech Engn, Hsinchu, Taiwan
Keywords
Aggressive flight; reinforcement learning; quadrotor
DOI
10.1109/SCISISIS55246.2022.10002100
CLC Classification Number
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Micro aerial vehicles are being widely researched and employed owing to their relatively low operating costs and high flexibility across applications. We study the under-actuated quadrotor perching problem, designing a trajectory planner and controller that generates feasible trajectories and drives the quadrotor to a desired state in the state space. This paper proposes a trajectory generation and tracking method for quadrotor perching that combines the advantages of a reinforcement learning controller and a traditional controller. We demonstrate that the trained reinforcement learning controller generates trajectory information and steers the quadrotor toward the perching point after being thrown into the air by hand with an initial velocity of 1 m/s. We show that this control structure of trajectories and controllers enables such aggressive maneuvers, perching on vertical surfaces with relatively high accuracy. Evaluating the policy takes only 0.03 s per trajectory, two orders of magnitude less than common trajectory optimization algorithms using an approximated model.
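The abstract's claim that policy evaluation takes only 0.03 s per trajectory rests on the fact that a trained reinforcement learning policy is just a fixed forward pass, not an iterative optimization. The sketch below is illustrative only (not the authors' code): it times one forward pass of a small feedforward policy mapping a quadrotor state to trajectory parameters. The network sizes and the 12-D state / 4-D output dimensions are assumptions chosen for illustration.

```python
# Illustrative sketch: timing a single evaluation of a small MLP policy.
# Weights are random; dimensions (12-D state -> 4-D output) and layer
# width (64) are assumptions, not taken from the paper.
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer policy network.
W1, b1 = rng.standard_normal((64, 12)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((4, 64)), rng.standard_normal(4)

def evaluate_policy(state):
    """One forward pass: quadrotor state -> trajectory parameters."""
    h = np.tanh(W1 @ state + b1)
    return W2 @ h + b2

state = rng.standard_normal(12)  # e.g. position, velocity, attitude, rates
t0 = time.perf_counter()
out = evaluate_policy(state)
elapsed = time.perf_counter() - t0

print(out.shape)       # (4,)
print(elapsed < 0.03)  # a single forward pass is far below 0.03 s
```

Because the per-query cost is a fixed number of matrix multiplies, it stays small regardless of how hard the underlying trajectory optimization problem would be to solve online.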
Pages: 6
Related Papers
50 total
  • [31] Upgraded trajectory planning method deployed in autonomous exploration for unmanned aerial vehicle
    Zhang, Tong
    Yu, Jiajie
    Li, Jiaqi
    Wei, Jianli
    [J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2022, 19 (04)
  • [32] Unmanned Aerial Vehicle Pitch Control Using Deep Reinforcement Learning with Discrete Actions in Wind Tunnel Test
    Wada, Daichi
    Araujo-Estrada, Sergio A.
    Windsor, Shane
    [J]. AEROSPACE, 2021, 8 (01) : 1 - 16
  • [33] Docking Control of an Autonomous Underwater Vehicle Using Reinforcement Learning
    Anderlini, Enrico
    Parker, Gordon G.
    Thomas, Giles
    [J]. APPLIED SCIENCES-BASEL, 2019, 9 (17)
  • [34] Wireless Control of Autonomous Guided Vehicle Using Reinforcement Learning
    Ana, Pedro M. de Sant
    Marchenko, Nikolaj
    Popovski, Petar
    Soret, Beatriz
    [J]. 2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020
  • [35] Robust Trajectory Tracking Control for a Quadrotor Unmanned Aerial Vehicle Using Disturbance Observer
    Yang, Yi
    Wu, Qingxian
    Chen, Mou
    [J]. PROCEEDINGS OF THE 2016 12TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2016, : 697 - 702
  • [36] Vision-Guided Contact Force Control of an Omnidirectional Unmanned Aerial Vehicle
    Xu, Mengxin
    Li, Kai
    Hu, An
    Wang, Hesheng
    [J]. IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2024, 60 (04) : 4419 - 4428
  • [37] Trajectory tracking of unmanned aerial vehicle using servomechanism strategy
    Kakirde, NP
    Davari, A
    Wang, J
    [J]. PROCEEDINGS OF THE THIRTY-SEVENTH SOUTHEASTERN SYMPOSIUM ON SYSTEM THEORY, 2005, : 163 - 166
  • [38] Deep Reinforcement Learning Based Optimal Trajectory Tracking Control of Autonomous Underwater Vehicle
    Yu, Runsheng
    Shi, Zhenyu
    Huang, Chaoxing
    Li, Tenglong
    Ma, Qiongxiong
    [J]. PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, : 4958 - 4965
  • [39] Attitude Control and Trajectory tracking of an Autonomous Miniature Aerial Vehicle
    Haddadi, Seyed Jamal
    Emamagholi, Omid
    Javidi, Farahnaz
    Fakharian, Ahmad
    [J]. 2015 AI & ROBOTICS (IRANOPEN), 2015
  • [40] Visual Navigation and Landing Control of an Unmanned Aerial Vehicle on a Moving Autonomous Surface Vehicle via Adaptive Learning
    Zhang, Hai-Tao
    Hu, Bin-Bin
    Xu, Zhecheng
    Cai, Zhi
    Liu, Bin
    Wang, Xudong
    Geng, Tao
    Zhong, Sheng
    Zhao, Jin
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5345 - 5355