Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Cited: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Affiliations
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; OPTIMAL TRACKING; CONVERTER; DESIGN;
DOI
10.1109/ACCESS.2023.3297274
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Data-driven approaches like reinforcement learning (RL) allow a model-free, self-adaptive controller design that enables a fast and largely automatic controller development process with minimal human effort. While various power electronic applications have already shown that RL can sufficiently handle the transient control behavior of complex systems, the challenge of non-vanishing steady-state control errors remains, arising from the use of control policy approximations and finite training times. This is a crucial problem in power electronic applications that require steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback and does not require any expert knowledge, leaving the approach model-free. The RL controller thereby learns to suppress steady-state control deviations more effectively. The benefit of the developed method for both reference tracking and disturbance rejection is validated on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
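The integral action state augmentation described in the abstract can be illustrated with a minimal sketch: the RL agent's observation vector is extended by the running integral of the control error, so the policy can learn an integrating feedback without any plant model. The class name, parameters, and the anti-windup clipping below are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

class IntegralErrorAugmentation:
    """Extends an RL observation with the time integral of the control
    error, mimicking an integrating feedback path (illustrative sketch)."""

    def __init__(self, n_refs, dt, clip=10.0):
        self.dt = dt                      # sampling time of the control loop
        self.clip = clip                  # simple anti-windup saturation (assumed)
        self.integral = np.zeros(n_refs)  # one integrator per reference signal

    def reset(self):
        # call at the start of each training episode
        self.integral[:] = 0.0

    def augment(self, observation, reference, measurement):
        # accumulate the control error via forward-Euler integration
        error = reference - measurement
        self.integral = np.clip(self.integral + error * self.dt,
                                -self.clip, self.clip)
        # append the integrator states to the original observation vector
        return np.concatenate([observation, self.integral])
```

At every control step, the augmented observation (rather than the raw one) is fed to the policy; a persistent steady-state error then grows the integrator state, which the trained policy learns to drive back toward zero.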
Pages: 76524-76536
Number of pages: 13
Related Papers
50 records total
  • [41] Reinforcement Learning-Based Control for a Class of Nonlinear Systems with unknown control directions
    Song, Xiaoling
    Huang, Miao
    Wen, Gang
    Ma, Longhua
    Yao, Jiaqing
    Lu, Zheming
    PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (CCC), 2019, : 2519 - 2524
  • [42] Reinforcement Learning-Based Tracking Control for Networked Control Systems With DoS Attacks
    Liu, Jinliang
    Dong, Yanhui
    Zha, Lijuan
    Xie, Xiangpeng
    Tian, Engang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4188 - 4197
  • [43] The error of the quasi steady-state approximation in spatially distributed systems
    Yannacopoulos, AN
    Tomlin, AS
    Brindley, J
    Merkin, JH
    Pilling, MJ
    CHEMICAL PHYSICS LETTERS, 1996, 248 (1-2) : 63 - 70
  • [44] STEADY-STATE SENSITIVITY ERROR COEFFICIENTS IN MULTIVARIABLE LINEAR SYSTEMS
    SIMES, JG
    ETZWEILER, GA
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 1968, AC13 (06) : 743 - +
  • [45] Technique for simulating the steady-state response of power electronic converters
    Naidu, S. R.
    Fernandes, D. A.
    IET POWER ELECTRONICS, 2011, 4 (03) : 269 - 277
  • [46] Hierarchical Reinforcement Learning-based Supervisory Control of Unknown Nonlinear Systems
    Makumi, Wanjiku A.
    Greene, Max L.
    Bell, Zachary I.
    Nivison, Scott
    Kamalapurkar, Rushikesh
    Dixon, Warren E.
    IFAC PAPERSONLINE, 2023, 56 (02): : 6871 - 6876
  • [47] Model-free reinforcement learning-based transient power control of vehicle fuel cell systems
    Zhang, Yahui
    Li, Ganxin
    Tian, Yang
    Wang, Zhong
    Liu, Jinfa
    Gao, Jinwu
    Jiao, Xiaohong
    Wen, Guilin
    APPLIED ENERGY, 2025, 388
  • [48] Reinforcement learning-based adaptive production control of pull manufacturing systems
    Xanthopoulos, A. S.
    Chnitidis, G.
    Koulouriotis, D. E.
    JOURNAL OF INDUSTRIAL AND PRODUCTION ENGINEERING, 2019, 36 (05) : 313 - 323
  • [49] Automated synthesis of steady-state continuous processes using reinforcement learning
    Quirin Göttl
    Dominik G. Grimm
    Jakob Burger
    Frontiers of Chemical Science and Engineering, 2022, 16 (02) : 288 - 302