Reinforcement Learning based Data-driven Optimal Control Strategy for Systems with Disturbance

Cited by: 0
Authors
Fan, Zhong-Xin [1 ,2 ]
Li, Shihua [1 ,2 ]
Liu, Rongjie [3 ]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
[2] Minist Educ, Key Lab Measurement & Control Complex Syst Engn, Nanjing 210096, Peoples R China
[3] Florida State Univ, Dept Stat, Tallahassee, FL 32306 USA
Keywords
Reinforcement learning; data-driven; input-output data; disturbance observer; adaptive dynamic programming; output feedback; output regulation; algorithm; observer
DOI
10.1109/DDCLS58216.2023.10167230
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Subject Classification Code: 0812
Abstract
This paper proposes a partially model-free optimal control strategy for a class of continuous-time systems in a data-driven way. Although a series of optimal control methods have achieved superior performance, the following challenges remain: (i) a controller designed for the nominal system struggles to cope with sudden disturbances; (ii) feedback control depends heavily on the system dynamics and generally requires full state information. To address these two challenges, a novel composite control method is developed that combines output feedback reinforcement learning with an input-output disturbance observer. First, an output feedback policy iteration (PI) algorithm is given to acquire the feedback gain iteratively. Simultaneously, the observer continuously provides estimates of the disturbance. Neither the system dynamics nor the system states need to be known in advance, so the approach offers a higher degree of robustness and better prospects for practical implementation. Finally, an example is given to show the effectiveness of the proposed controller.
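The abstract does not spell out the policy iteration step it refers to. As a rough, hedged illustration only, the sketch below shows a standard model-based Kleinman policy iteration for a continuous-time LQR problem; the plant matrices A, B, the weights Q, R, and the iteration count are invented for the example. The paper's actual method is different in that it avoids knowledge of A and B by using input-output data, works with output feedback, and adds a disturbance observer.

```python
# Illustrative sketch only: model-based Kleinman policy iteration for a
# continuous-time LQR problem. A and B are assumed known here purely to show
# how a feedback gain can be refined iteratively; the cited paper replaces
# this model knowledge with measured input-output data.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical second-order plant and cost weights (assumptions).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable, so K = 0 is an admissible start
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                        # initial stabilizing gain
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0 for P.
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K_new = R^{-1} B' P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

# Sanity check against the algebraic Riccati equation solution.
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
print("PI gain :", K)
print("ARE gain:", K_star)
```

With the assumed matrices, the iterated gain converges to the Riccati-based optimal gain within a few iterations, which is the behavior the data-driven variant reproduces without access to the model.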
Pages: 567-572
Number of pages: 6