Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Cited by: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Affiliation
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; optimal tracking; converter; design
DOI
10.1109/ACCESS.2023.3297274
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Data-driven approaches like reinforcement learning (RL) allow a model-free, self-adaptive controller design that enables a fast and largely automatic development process with minimal human effort. While it has already been shown in various power electronic applications that RL can adequately handle the transient control behavior of complex systems, the challenge of non-vanishing steady-state control errors remains, arising from the use of approximated control policies and finite training times. This is a crucial problem in power electronic applications that require steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback without requiring any expert knowledge, keeping the approach model-free. With this augmentation, the RL controller learns to suppress steady-state control deviations more effectively. The benefit of the developed method, both for reference tracking and disturbance rejection, is validated on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
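To make the abstract's core idea concrete, the following is a minimal sketch of an integral action state augmentation as the abstract describes it: a discrete-time integral of the control error is appended to the observation fed to the RL policy, mimicking integrating feedback while staying model-free. This is not the authors' implementation; the class name, the forward-Euler integration, and the anti-windup clipping bound are illustrative assumptions.

import numpy as np

# Minimal sketch (not the paper's code) of integral action state
# augmentation for an RL controller: the observation handed to the policy
# is extended with a clipped, discrete-time integral of the control error,
# mimicking the integrating feedback of a PI controller while keeping the
# approach model-free. The sampling time `dt` and the clipping bound
# `limit` are illustrative assumptions.
class IntegralErrorAugmentation:
    def __init__(self, n_errors, dt, limit=10.0):
        self.dt = dt                        # control-loop sampling time
        self.limit = limit                  # anti-windup style clipping bound
        self.integral = np.zeros(n_errors)  # accumulated control error

    def reset(self):
        # Call at the start of each training or validation episode.
        self.integral[:] = 0.0

    def augment(self, observation, reference, measurement):
        # Forward-Euler integration of the control error, clipped so the
        # augmented state cannot wind up during large transients.
        error = np.asarray(reference) - np.asarray(measurement)
        self.integral = np.clip(self.integral + error * self.dt,
                                -self.limit, self.limit)
        # The policy sees the original observation plus the error integral.
        return np.concatenate([np.asarray(observation), self.integral])

# Hypothetical usage in an inverter voltage-control loop:
#   aug = IntegralErrorAugmentation(n_errors=2, dt=1e-4)
#   obs_aug = aug.augment(obs, v_ref, v_meas)  # feed obs_aug to the policy

The sketch only illustrates how the augmented state is constructed; the training loop and the reported error reduction of up to 52% come from the paper itself.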
Pages: 76524-76536 (13 pages)
Related Papers (50 total)
  • [31] Optimal tracking control with zero steady-state error for linear systems with sinusoidal disturbances
    Zhang, SM
    Tang, GY
    DYNAMICS OF CONTINUOUS DISCRETE AND IMPULSIVE SYSTEMS-SERIES A-MATHEMATICAL ANALYSIS, 2006, 13 : 1471 - 1478
  • [32] Analyzing the steady-state error of nonunity feedback control systems by the concept of type number
    Mou, Shann-Chyi
    WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 690 - 694
  • [33] Steady-state error analysis of quasi sliding mode control for a class of nonlinear systems
    Li, Peng
    Ma, Jian-Jun
    Zheng, Zhi-Qiang
    Kongzhi yu Juece/Control and Decision, 2010, 25 (12): 1896 - 1900
  • [34] Quantizer effects on steady-state error specifications of digital feedback control systems
    Miller, R. K.
    Michel, A. N.
    Farrell, J. A.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 1989, 34 (06) : 651 - 654
  • [35] Deep Reinforcement Learning-Based Optimal Control of DC Shipboard Power Systems for Pulsed Power Load Accommodation
    Tu, Zhenghong
    Zhang, Wei
    Liu, Wenxin
    IEEE TRANSACTIONS ON SMART GRID, 2023, 14 (01) : 29 - 40
  • [36] On-line hierarchical control for steady-state systems
    Findeisen, W.
    Brdys, M.
    Malinowski, K.
    Tatjewski, P.
    Wozniak, A.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 1978, 23 (02) : 189 - 209
  • [37] Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey
    Al-Saadi, Mudhafar
    Al-Greer, Maher
    Short, Michael
    ENERGIES, 2023, 16 (04)
  • [38] Hierarchical control for systems operating in steady-state
    Brdys, M.
    Findeisen, W.
    Tatjewski, P.
    LARGE SCALE SYSTEMS IN INFORMATION AND DECISION TECHNOLOGIES, 1980, 1 (03): 193 - 213
  • [40] Reinforcement Learning-Based Power Management Policy for Mobile Device Systems
    Kwon, Eunji
    Han, Sodam
    Park, Yoonho
    Yoon, Jongho
    Kang, Seokhyeong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (10) : 4156 - 4169