Nonholonomic Yaw Control of an Underactuated Flying Robot With Model-Based Reinforcement Learning

Cited by: 7
Authors
Lambert, Nathan O. [1 ]
Schindler, Craig B. [1 ]
Drew, Daniel S. [2 ]
Pister, Kristofer S. J. [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] Stanford Univ, Dept Mech Engn, Stanford, CA 94305 USA
Keywords
Reinforcement learning; nonholonomic motion planning; aerial systems; mechanics and control; FLIGHT;
DOI
10.1109/LRA.2020.3045930
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Nonholonomic control is a candidate approach for controlling nonlinear systems with path-dependent states. We investigate an underactuated flying micro aerial vehicle, the ionocraft, that requires nonholonomic control in the yaw direction for complete attitude control. Deploying an analytical control law involves substantial engineering design and is sensitive to inaccuracy in the system model. Under specific assumptions on assembly and system dynamics, we derive a Lie bracket for yaw control of the ionocraft. As a comparison to the significant engineering effort required for an analytic control law, we implement a data-driven model-based reinforcement learning yaw controller in a simulated flight task. We demonstrate that a simple model-based reinforcement learning framework can match the derived Lie bracket control - in yaw rate and chosen actions - with a few minutes of flight data and without a predefined dynamics function. This letter shows that learning-based approaches are useful as a tool for the synthesis of nonlinear control laws previously addressable only through expert-based design.
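The yaw authority described in the abstract is a nonholonomic effect: motion along a direction that is not directly actuated can be generated by alternating available inputs, which is captured by the Lie bracket [f, g] = (∂g/∂x) f − (∂f/∂x) g of the corresponding control vector fields. The sketch below illustrates, in broad strokes, the kind of model-based reinforcement learning loop the abstract describes: fit a one-step dynamics model from logged flight data, then select thruster commands by random shooting against a yaw-rate objective. This is an illustrative sketch only, not the authors' implementation; the model class (a least-squares linear model here), the cost, the state layout, and all function names are assumptions.

```python
# Illustrative sketch of a model-based RL yaw controller (not the paper's code):
# fit a one-step dynamics model from logged (state, action, next state) flight
# data, then choose thruster commands by random shooting against a yaw-rate goal.
import numpy as np

class LearnedDynamics:
    """Least-squares one-step model predicting the change in state."""
    def __init__(self):
        self.W = None  # weights mapping [state, action, 1] -> state delta

    def fit(self, states, actions, next_states):
        X = np.hstack([states, actions, np.ones((len(states), 1))])
        Y = next_states - states                       # learn state deltas
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def predict(self, state, candidate_actions):
        n = len(candidate_actions)
        X = np.hstack([np.tile(state, (n, 1)), candidate_actions, np.ones((n, 1))])
        return state + X @ self.W                      # predicted next states

def choose_action(model, state, yaw_rate_idx, target_yaw_rate,
                  n_candidates=500, act_dim=4, act_max=1.0, rng=None):
    """Random-shooting planner: sample candidate thruster commands and keep the
    one whose predicted yaw rate is closest to the target."""
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(0.0, act_max, size=(n_candidates, act_dim))
    predicted = model.predict(state, candidates)
    cost = np.abs(predicted[:, yaw_rate_idx] - target_yaw_rate)
    return candidates[np.argmin(cost)]
```

In a closed loop, the planner would be re-run on each new state estimate and the model refit as additional flight data accumulate, which is what allows a few minutes of data to suffice without a predefined dynamics function.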
Pages: 455-461
Page count: 7