Hybrid Soft Actor-Critic and Incremental Dual Heuristic Programming Reinforcement Learning for Fault-Tolerant Flight Control

Cited by: 0
Authors: Teirlinck, C. [1]; van Kampen, Erik-Jan [1]
Affiliation: [1] Delft Univ Technol, Control & Simulat, POB 5058, NL-2600 GB Delft, Netherlands
DOI: not available
Abstract
Recent advancements in fault-tolerant flight control have applied model-free offline and online Reinforcement Learning (RL) algorithms to provide robust and adaptive control for autonomous systems. Inspired by recent work on Incremental Dual Heuristic Programming (IDHP) and Soft Actor-Critic (SAC), this research proposes a hybrid SAC-IDHP framework that aims to combine the adaptive online learning of IDHP with the generalization power of SAC for high-complexity tasks in controlling a fully coupled system. The hybrid framework is implemented in the inner loop of a cascaded altitude controller for a high-fidelity, six-degree-of-freedom model of the Cessna Citation II PH-LAB research aircraft. Compared to SAC-only control, the SAC-IDHP hybrid improves tracking performance by 0.74%, 5.46%, and 0.82% in normalized mean absolute error (nMAE) for the nominal, longitudinal-failure, and lateral-failure cases, respectively. Identity initialization of the hybrid policy eliminates random online policy initialization, supporting an argument for increased safety. Additionally, robustness to biased sensor noise, initial flight condition, and random critic initialization is demonstrated.
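The identity-initialization idea described in the abstract can be sketched as follows. This is an illustrative reading, not the paper's exact design: a frozen offline SAC policy proposes an action, and a small online IDHP actor adds a correction whose output weights start at zero, so the hybrid initially reproduces the SAC action exactly and avoids random online policy initialization. The class and layer shapes below are assumptions for illustration.

```python
import numpy as np

class HybridPolicy:
    """Sketch of a SAC-IDHP hybrid policy (illustrative, assumed structure).

    The online IDHP stage is initialized as an identity mapping around the
    SAC action: its output weights are zero, so the correction term vanishes
    at the start of online learning.
    """

    def __init__(self, sac_policy, action_dim, hidden=8, rng=None):
        rng = np.random.default_rng(rng)
        self.sac_policy = sac_policy
        # Small random hidden weights; zero output weights => zero residual.
        self.W1 = 0.01 * rng.standard_normal((hidden, action_dim))
        self.W2 = np.zeros((action_dim, hidden))  # identity initialization

    def __call__(self, state):
        a_sac = self.sac_policy(state)   # offline SAC proposal
        h = np.tanh(self.W1 @ a_sac)     # online correction features
        return a_sac + self.W2 @ h       # initially equals a_sac exactly


# With zero output weights, the hybrid matches the SAC action at start-up.
sac = lambda s: np.array([0.3, -0.1])   # stand-in for a trained SAC policy
pi = HybridPolicy(sac, action_dim=2, rng=0)
print(np.allclose(pi(np.zeros(4)), sac(None)))  # prints True
```

During online adaptation, only `W1` and `W2` would be updated by the IDHP critic, so the policy can drift away from the SAC baseline to compensate for faults while starting from a known-safe behavior.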
Pages: 22
Related Papers (50 total)
  • [21] Evolutionary Reinforcement Learning: Hybrid Approach for Safety-Informed Fault-Tolerant Flight Control
    Gavra, Vlad
    van Kampen, Erik-Jan
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2024, 47 (05) : 887 - 900
  • [22] SOFT ACTOR-CRITIC REINFORCEMENT LEARNING FOR ROBOTIC MANIPULATOR WITH HINDSIGHT EXPERIENCE REPLAY
    Yan, Tao
    Zhang, Wenan
    Yang, Simon X.
    Yu, Li
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2019, 34 (05): 536 - 543
  • [23] SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations
    Ezzeddine, Fatima
    Ayoub, Omran
    Andreoletti, Davide
    Giordano, Silvia
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT I, 2023, 1901 : 195 - 216
  • [24] CONTROLLED SENSING AND ANOMALY DETECTION VIA SOFT ACTOR-CRITIC REINFORCEMENT LEARNING
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4198 - 4202
  • [25] A soft actor-critic reinforcement learning framework for optimal energy management in electric vehicles with hybrid storage
    Mazzi, Yahia
    Ben Sassi, Hicham
    Errahimi, Fatima
    Es-Sbai, Najia
    JOURNAL OF ENERGY STORAGE, 2024, 99
  • [26] Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach
    Pan, Jie
    Huang, Jingwei
    Cheng, Gengdong
    Zeng, Yong
    NEURAL NETWORKS, 2023, 157 : 288 - 304
  • [27] Multi-agent dual actor-critic framework for reinforcement learning navigation
    Xiong, Fengguang
    Zhang, Yaodan
    Kuang, Xinhe
    He, Ligang
    Han, Xie
    APPLIED INTELLIGENCE, 2025, 55 (02)
  • [28] Enhancing HVAC Control Systems Using a Steady Soft Actor-Critic Deep Reinforcement Learning Approach
    Sun, Hongtao
    Hu, Yushuang
    Luo, Jinlu
    Guo, Qiongyu
    Zhao, Jianzhe
    BUILDINGS, 2025, 15 (04)
  • [29] Hybrid actor-critic algorithm for quantum reinforcement learning at CERN beam lines
    Schenk, Michael
    Combarro, Elias F.
    Grossi, Michele
    Kain, Verena
    Li, Kevin Shing Bruce
    Popa, Mircea-Marian
    Vallecorsa, Sofia
    QUANTUM SCIENCE AND TECHNOLOGY, 2024, 9 (02)
  • [30] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
    Haarnoja, Tuomas
    Zhou, Aurick
    Abbeel, Pieter
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80