Hybrid Soft Actor-Critic and Incremental Dual Heuristic Programming Reinforcement Learning for Fault-Tolerant Flight Control

Cited by: 0
Authors
Teirlinck, C. [1 ]
van Kampen, Erik-Jan [1 ]
Affiliations
[1] Delft Univ Technol, Control & Simulat, POB 5058, NL-2600 GB Delft, Netherlands
DOI: not available
Abstract
Recent advancements in fault-tolerant flight control have employed model-free offline and online Reinforcement Learning (RL) algorithms to provide robust and adaptive control to autonomous systems. Inspired by recent work on Incremental Dual Heuristic Programming (IDHP) and Soft Actor-Critic (SAC), this research proposes a hybrid SAC-IDHP framework that aims to combine the adaptive online learning of IDHP with the high-complexity generalization power of SAC in controlling a fully coupled system. The hybrid framework is implemented in the inner loop of a cascaded altitude controller for a high-fidelity, six-degree-of-freedom model of the Cessna Citation II PH-LAB research aircraft. Compared to SAC alone, the SAC-IDHP hybrid improves tracking performance by 0.74%, 5.46% and 0.82% in nMAE for the nominal, longitudinal-failure and lateral-failure cases, respectively. Because the hybrid policy is identity-initialized, random online policy initialization is eliminated, which supports an argument for increased safety. Additionally, robustness to biased sensor noise, to the initial flight condition and to random critic initialization is demonstrated.
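The abstract's key safety argument rests on identity initialization: the online element of the hybrid policy initially passes the SAC action through unchanged, so the controller starts from the offline-trained behavior rather than a random policy. A minimal sketch of that idea, together with a normalized mean absolute error (nMAE) metric, is given below; all function names, shapes, and the range-based normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def identity_init_weights(dim):
    """Online policy element initialized to the identity map, so the hybrid
    policy initially reproduces the SAC action exactly (assumed structure)."""
    return np.eye(dim)

def hybrid_action(sac_action, online_weights):
    """Hybrid action: the online element applied to the SAC policy output."""
    return online_weights @ sac_action

def nmae(reference, tracked):
    """Mean absolute tracking error normalized by the reference signal range
    (one plausible nMAE definition; the paper's exact normalization may differ)."""
    ref = np.asarray(reference, dtype=float)
    trk = np.asarray(tracked, dtype=float)
    span = np.ptp(ref)  # peak-to-peak range of the reference
    return np.mean(np.abs(ref - trk)) / (span if span > 0 else 1.0)

# Before any online adaptation, the hybrid action equals the SAC action.
W = identity_init_weights(2)
a_sac = np.array([0.1, -0.3])
a_hybrid = hybrid_action(a_sac, W)
```

As the IDHP-style element adapts `W` online, the hybrid policy departs from the identity map only where the offline SAC behavior needs correction, which is the basis for the increased-safety argument.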
Pages: 22
Related Papers (50 records in total)
  • [41] Network Congestion Control Algorithm Based on Actor-Critic Reinforcement Learning Model
    Xu, Tao
    Gong, Lina
    Zhang, Wei
    Li, Xuhong
    Wang, Xia
    Pan, Wenwen
    ADVANCES IN MATERIALS, MACHINERY, ELECTRONICS II, 2018, 1955
  • [42] Smart energy management for hybrid electric bus via improved soft actor-critic algorithm in a heuristic learning framework
    Huang, Ruchen
    He, Hongwen
    Su, Qicong
    ENERGY, 2024, 309
  • [43] Power Allocation in Dual Connectivity Networks Based on Actor-Critic Deep Reinforcement Learning
    Moein, Elham
    Hasibi, Ramin
    Shokri, Matin
    Rasti, Mehdi
    17TH INTERNATIONAL SYMPOSIUM ON MODELING AND OPTIMIZATION IN MOBILE, AD HOC, AND WIRELESS NETWORKS (WIOPT 2019), 2019, : 170 - 177
  • [44] Power Allocation in HetNets with Hybrid Energy Supply Using Actor-Critic Reinforcement Learning
    Wei, Yifei
    Zhang, Zhiqiang
    Yu, F. Richard
    Han, Zhu
    GLOBECOM 2017 - 2017 IEEE GLOBAL COMMUNICATIONS CONFERENCE, 2017,
  • [45] Dual heuristic programming with just-in-time modeling for self-learning fault-tolerant control of mobile robots
    Zhang, Changxin
    Xu, Xin
    Zhang, Xinglong
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2023, 44 (03): : 1215 - 1234
  • [46] Graph Soft Actor-Critic Reinforcement Learning for Large-Scale Distributed Multirobot Coordination
    Hu, Yifan
    Fu, Junjie
    Wen, Guanghui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, : 1 - 12
  • [47] Bayesian Soft Actor-Critic: A Directed Acyclic Strategy Graph Based Deep Reinforcement Learning
    Yang, Qin
    Parasuraman, Ramviyas
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 646 - 648
  • [48] Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control With Action Constraints
    Kasaura, Kazumi
    Miura, Shuwa
    Kozuno, Tadashi
    Yonetani, Ryo
    Hoshino, Kenta
    Hosoe, Yohei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (08) : 4449 - 4456
  • [49] Actor-Critic Traction Control Based on Reinforcement Learning with Open-Loop Training
    Drechsler, M. Funk
    Fiorentin, T. A.
    Goellinger, H.
    MODELLING AND SIMULATION IN ENGINEERING, 2021, 2021
  • [50] Digital Twin With Soft Actor-Critic Reinforcement Learning for Transitioning From Industry 4.0 to 5.0
    Asmat, Hamid
    Ud Din, Ikram
    Almogren, Ahmad
    Khan, Muhammad Yasar
    IEEE ACCESS, 2025, 13 : 40577 - 40593