Acceleration control strategy for aero-engines based on model-free deep reinforcement learning method

Cited: 0
Authors
Gao, Wenbo [1 ]
Zhou, Xin [1 ]
Pan, Muxuan [1 ]
Zhou, Wenxiang [1 ]
Lu, Feng [1 ]
Huang, Jinquan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Energy & Power Engn, Jiangsu Prov Key Lab Power Syst, Nanjing 210016, Peoples R China
Keywords
Reinforcement learning; Neural network; Aero-engine control; Nonlinear system;
DOI
Not available
Chinese Library Classification
V [Aviation, Aerospace];
Discipline Classification Code
08 ; 0825 ;
Abstract
Deep reinforcement learning has emerged as a powerful control method, especially for complex nonlinear systems such as aero-engine control systems, owing to its strong representation ability and its capability to learn from measured data. This paper presents a novel control strategy based on deep reinforcement learning to speed up the acceleration process of aero-engines. An actor-critic framework is adopted, in which the actor neural network seeks the optimal control policy while the critic network evaluates the current control policy. The deep deterministic policy gradient algorithm is used to update the parameters of the neural networks. In addition, a complementary integrator is introduced to eliminate the steady-state error caused by the approximation error of the deep neural networks, and a momentum term is introduced to set limits on the input of the control system, thereby suppressing overruns during the early learning and exploration phases. Numerical simulations show that a controller using this new strategy copes with different flight conditions and significantly speeds up the acceleration process of the aero-engine. (C) 2021 Elsevier Masson SAS. All rights reserved.
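The role of the complementary integrator described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's model: a toy first-order plant stands in for the rotor dynamics, a fixed linear feedback with an offset `D` mimics a trained actor network whose approximation error leaves a steady-state error, and `ki` is a hypothetical integrator gain.

```python
# Toy discrete-time plant (assumed for illustration, not the paper's model):
#   x[k+1] = 0.9*x[k] + 0.5*u[k]
# The "actor" u = 0.4*(R - x) + D imitates a learned policy whose
# approximation error D causes a steady-state offset; the complementary
# integrator term ki*z accumulates the tracking error and removes it.

R = 1.0      # setpoint (e.g. normalised target rotor speed) - hypothetical
D = -0.1     # fixed actor approximation error - hypothetical

def simulate(ki, steps=500):
    """Run the closed loop; ki is the complementary integrator gain."""
    x, z = 0.0, 0.0
    for _ in range(steps):
        e = R - x
        z += e                      # integral of the tracking error
        u = 0.4 * e + D + ki * z    # actor output + integrator correction
        x = 0.9 * x + 0.5 * u       # plant update
    return x

print(simulate(0.0))    # ~0.5: approximation error leaves a steady-state offset
print(simulate(0.05))   # ~1.0: integrator drives the tracking error to zero
```

With `ki = 0` the closed loop settles at 0.5 instead of the setpoint 1.0, exactly the kind of steady-state error a network approximation can cause; a small integral gain restores zero error while keeping the loop stable.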
Pages: 10
Related papers
50 records
  • [1] Acceleration control strategy for aero-engines based on model-free deep reinforcement learning method
    Gao, Wenbo
    Zhou, Xin
    Pan, Muxuan
    Zhou, Wenxiang
    Lu, Feng
    Huang, Jinquan
    [J]. AEROSPACE SCIENCE AND TECHNOLOGY, 2022, 120
  • [2] An Improved Model-Free Adaptive Control Algorithm and the Application in Aero-engines
    Liu, Xiao-Yu
    Sun, Xi-Ming
    Zhang, Yong-Liang
    Wang, Xue-Fang
    Wen, Si-Xin
    [J]. PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 1940 - 1945
  • [3] Acceleration Control Design for Turbofan Aero-engines Based on A Switching Control Strategy
    Chen, Chao
    Ma, Dan
    Mao, Xiaoqi
    Sun, Haobo
    [J]. PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 7 - 12
  • [4] Overshoot-free acceleration of aero-engines: An energy-based switching control method
    Wang, Xia
    Zhao, Jun
    Sun, Xi-Ming
    [J]. CONTROL ENGINEERING PRACTICE, 2016, 47 : 28 - 36
  • [5] An Adaptive Model-Free Control Method for Metro Train Based on Deep Reinforcement Learning
    Lai, Wenzhu
    Chen, Dewang
    Huang, Yunhu
    Huang, Benzun
    [J]. ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153 : 263 - 273
  • [6] A Safety Protection Control Strategy for Aero-engines
    Shi Yan
    Zhao Jun
    [J]. PROCEEDINGS OF THE 35TH CHINESE CONTROL CONFERENCE 2016, 2016, : 2454 - 2458
  • [7] Novel Model-free Optimal Active Vibration Control Strategy Based on Deep Reinforcement Learning
    Zhang, Yi-Ang
    Zhu, Songye
    [J]. STRUCTURAL CONTROL & HEALTH MONITORING, 2023, 2023
  • [8] Nonlinear Model Predictive Control Strategy for Limit Management of Aero-Engines
    Du, Xian
    Ma, Yan-Hua
    Sun, Xi-Ming
    [J]. INTERNATIONAL JOURNAL OF TURBO & JET-ENGINES, 2022, 39 (03) : 427 - 438
  • [9] Model-free Based Reinforcement Learning Control Strategy of Aircraft Attitude Systems
    Huang, Dingcui
    Hu, Jiangping
    Peng, Zhinan
    Chen, Bo
    Hao, Mingrui
    Ghosh, Bijoy Kumar
    [J]. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 743 - 748
  • [10] Reinforcement Learning-Based Fault Tolerant Control Design for Aero-Engines With Multiple Types of Faults
    Qian, Moshu
    Jiang, Bin
    Sun, Chenglin
    Shi, Jiantao
    Bo, Cuimei
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2024