Steady-State Error Compensation for Reinforcement Learning-Based Control of Power Electronic Systems

Cited by: 4
Authors
Weber, Daniel [1 ]
Schenke, Maximilian [1 ]
Wallscheid, Oliver [1 ]
Affiliations
[1] Paderborn Univ, Dept Power Elect & Elect Drives, D-33098 Paderborn, Germany
Keywords
Control; disturbance rejection; power electronic systems; reference tracking; reinforcement learning; steady-state error; OPTIMAL TRACKING; CONVERTER; DESIGN
DOI
10.1109/ACCESS.2023.3297274
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Data-driven approaches such as reinforcement learning (RL) allow a model-free, self-adaptive controller design, enabling a fast and largely automatic development process with minimal human effort. While various power electronic applications have already demonstrated that RL can handle the transient control behavior of complex systems sufficiently well, the challenge of non-vanishing steady-state control errors remains; it arises from the use of approximate control policies and finite training times. This is a crucial problem in power electronic applications that require steady-state control accuracy, e.g., voltage control of grid-forming inverters or accurate current control in motor drives. To overcome this issue, an integral action state augmentation for RL controllers is introduced that mimics an integrating feedback and does not require any expert knowledge, leaving the approach model-free. With this augmentation, the RL controller learns to suppress steady-state control deviations more effectively. The benefit of the developed method, both for reference tracking and disturbance rejection, is validated on two voltage source inverter control tasks targeting islanded microgrid and traction drive applications. Compared to a standard RL setup, the suggested extension reduces the steady-state error by up to 52% within the considered validation scenarios.
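As an illustration of the approach summarized in the abstract, the following minimal Python sketch shows one way an integral action state augmentation could be realized as a Gymnasium-style observation wrapper: the discrete-time integral of the tracking error is appended to the observation, so a model-free RL policy can learn the equivalent of integrating feedback. This is an assumption-based sketch, not the authors' implementation; the names IntegralErrorWrapper, error_fn, dt, and leak are illustrative, and a Box observation space is assumed.

    import numpy as np
    import gymnasium as gym

    class IntegralErrorWrapper(gym.ObservationWrapper):
        # Hypothetical sketch of an integral action state augmentation:
        # the accumulated tracking error is appended to the observation,
        # keeping the approach model-free (no plant model required).

        def __init__(self, env, error_fn, dt=1e-4, leak=1.0):
            super().__init__(env)
            self.error_fn = error_fn  # maps raw observation -> tracking error e_k (assumed helper)
            self.dt = dt              # controller sampling time (assumed value)
            self.leak = leak          # optional leakage < 1 keeps the integrator state bounded
            self.integral = 0.0
            # extend the (assumed) Box observation space by one integrator state
            low = np.append(env.observation_space.low, -np.inf)
            high = np.append(env.observation_space.high, np.inf)
            self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

        def reset(self, **kwargs):
            self.integral = 0.0  # restart the integrator at every episode
            return super().reset(**kwargs)

        def observation(self, obs):
            # forward-Euler integration of the control error, analogous to a PI controller's I-term
            self.integral = self.leak * self.integral + self.dt * float(self.error_fn(obs))
            return np.append(obs, self.integral)

Any standard RL algorithm can then be trained on the wrapped environment unchanged: the policy sees the integrator state as one additional input and is rewarded for driving it, and thereby the steady-state error, toward zero.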
Pages: 76524-76536
Page count: 13
Related papers
50 records in total
  • [1] Steady-State Error Compensation for Reinforcement Learning with Quadratic Rewards
    Wang, Liyao
    Zheng, Zishun
    Lin, Yuan
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024: 1608-1613
  • [2] Reinforcement Learning-Based Control of a Power Electronic Converter
    Alfred, Dajr
    Czarkowski, Dariusz
    Teng, Jiaxin
    MATHEMATICS, 2024, 12 (05)
  • [3] On Steady-State Error of Nonlinear Control Systems
    Maeda, H.
    Kodama, S.
    Shirakaw, H.
    ELECTRONICS & COMMUNICATIONS IN JAPAN, 1968, 51 (06): 159-&
  • [4] Reinforcement Learning-Based Predictive Control for Power Electronic Converters
    Wan, Yihao
    Xu, Qianwen
    Dragicevic, Tomislav
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024
  • [5] Investigation of P and PD Controllers' Performance in Control Systems with Steady-State Error Compensation
    Levisauskas, D.
    Tekorius, T.
    ELEKTRONIKA IR ELEKTROTECHNIKA, 2012, 121 (05): 63-68
  • [6] Reinforcement learning-based power control in mobile communications systems
    Gao, XZ
    Ovaska, SJ
    Vasilakos, AV
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2002, 8 (04): 337-352
  • [7] Type Number Based Steady-State Error Analysis on Fractional Order Control Systems
    Pan, Jinwen
    Gao, Qing
    Qiu, Jianbin
    Wang, Yong
    ASIAN JOURNAL OF CONTROL, 2017, 19 (01): 266-278
  • [8] Reinforcement learning in steady-state genetic algorithms
    Lee, CY
    Antonsson, EK
    CEC'02: PROCEEDINGS OF THE 2002 CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS 1 AND 2, 2002: 1793-1797
  • [9] RUL prediction for AECs of power electronic systems based on machine learning and error compensation
    Sun, Quan
    Yang, Lichen
    Li, Hongsheng
    Sun, Guodong
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 44 (05): 7407-7417
  • [10] Development of Self-Tuning Control System with Fuzzy Compensation of Steady-State Error
    Denisova, Liudmila
    Meshcheryakov, Vitalii
    2018 INTERNATIONAL CONFERENCE ON INDUSTRIAL ENGINEERING, APPLICATIONS AND MANUFACTURING (ICIEAM), 2018