Reinforcement Learning for Optimal Primary Frequency Control: A Lyapunov Approach

Cited: 21
Authors
Cui, Wenqi [1 ]
Jiang, Yan [1 ]
Zhang, Baosen [1 ]
Affiliation
[1] Univ Washington, Dept Elect & Comp Engn, Seattle, WA 98195 USA
Funding
US National Science Foundation;
Keywords
Frequency control; Power system stability; Synchronous generators; Power system dynamics; Generators; Costs; Nonlinear dynamical systems; primary frequency control; nonlinear systems; reinforcement learning; STABILITY; STORAGE; PERFORMANCE; SYSTEMS; INERTIA; MODEL;
DOI
10.1109/TPWRS.2022.3176525
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification
0808; 0809;
Abstract
As more inverter-connected renewable resources are integrated into the grid, frequency stability may degrade because of the reduction in mechanical inertia and damping. A common approach to mitigate this degradation in performance is to use the power electronic interfaces of the renewable resources for primary frequency control. Since inverter-connected resources can realize almost arbitrary responses to frequency changes, they are not limited to reproducing linear droop behavior. To fully leverage their capabilities, reinforcement learning (RL) has emerged as a popular method to design nonlinear controllers that optimize a host of objective functions. Because both inverter-connected resources and synchronous generators will remain a significant part of the grid in the near and intermediate future, the learned controller of the former should be stabilizing with respect to the nonlinear dynamics of the latter. To overcome this challenge, we explicitly engineer the structure of neural network-based controllers such that they guarantee system stability by construction, through the use of a Lyapunov function. A recurrent neural network architecture is used to efficiently train the controllers. The resulting controllers only use local information and outperform optimal linear droop as well as other state-of-the-art learning approaches.
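The abstract's key idea is to constrain the neural controller's shape so that stability holds by construction rather than by verification after training. A minimal sketch of one such structural constraint, assuming (as the Lyapunov argument suggests) that it suffices for each local controller u_i(omega_i) to be monotonically increasing and pass through the origin: stack ReLU hinges with squared (hence nonnegative) slopes on each side of zero. The class name, breakpoint scheme, and unit count below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class MonotoneController:
    """Hypothetical sketch of a decentralized controller u(omega) that is
    monotonically increasing and satisfies u(0) = 0 by construction.
    Squaring the weights makes every hinge slope nonnegative, so
    monotonicity cannot be violated no matter how training updates them."""

    def __init__(self, n_units=8, span=1.0):
        # free parameters; squaring below keeps slopes nonnegative
        self.w_pos = rng.normal(size=n_units)
        self.w_neg = rng.normal(size=n_units)
        # fixed nonnegative breakpoints, mirrored about the origin
        self.b = np.linspace(0.0, span, n_units, endpoint=False)

    def __call__(self, omega):
        # hinges active for omega > b_j (rising part)
        up = np.sum(self.w_pos**2 * np.maximum(omega - self.b, 0.0))
        # hinges active for omega < -b_j; subtracting keeps u increasing
        down = np.sum(self.w_neg**2 * np.maximum(-omega - self.b, 0.0))
        return up - down  # all b_j >= 0, so u(0) = 0 exactly

ctrl = MonotoneController()
ws = np.linspace(-2.0, 2.0, 101)
us = np.array([ctrl(w) for w in ws])
assert abs(ctrl(0.0)) < 1e-12          # passes through the origin
assert np.all(np.diff(us) >= -1e-12)   # monotonically increasing
```

Under this parameterization any gradient-based RL update stays inside the stabilizing controller class, which is what lets the paper train with a recurrent unrolling of the closed-loop dynamics without a separate stability check.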
Pages: 1676-1688 (13 pages)
Related Papers (50 total)
  • [1] Inverse Reinforcement Learning: A Control Lyapunov Approach
    Tesfazgi, Samuel
    Lederer, Armin
    Hirche, Sandra
    [J]. 2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 3627 - 3632
  • [2] A Lyapunov approach for stable reinforcement learning
    Clempner, Julio B.
    [J]. COMPUTATIONAL & APPLIED MATHEMATICS, 2022, 41 (06)
  • [3] Stable Reinforcement Learning for Optimal Frequency Control: A Distributed Averaging-Based Integral Approach
    Jiang, Yan
    Cui, Wenqi
    Zhang, Baosen
    Cortes, Jorge
    [J]. IEEE OPEN JOURNAL OF CONTROL SYSTEMS, 2022, 1 : 194 - 209
  • [4] Control Lyapunov-barrier function-based safe reinforcement learning for nonlinear optimal control
    Wang, Yujia
    Wu, Zhe
    [J]. AICHE JOURNAL, 2024, 70 (03)
  • [5] Damping control by fusion of reinforcement learning and control Lyapunov functions
    Glavic, Mevludin
    Ernst, Damien
    Wehenkel, Louis
    [J]. 2006 38TH ANNUAL NORTH AMERICAN POWER SYMPOSIUM, NAPS-2006 PROCEEDINGS, 2006, : 361+
  • [6] A Multiagent Reinforcement Learning Approach for Wind Farm Frequency Control
    Liang, Yanchang
    Zhao, Xiaowei
    Sun, Li
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (02) : 1725 - 1734
  • [7] Secondary Frequency Control of Microgrids: An Online Reinforcement Learning Approach
    Adibi, Mahya
    van der Woude, Jacob
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2022, 67 (09) : 4824 - 4831
  • [8] A Lyapunov-based Approach to Safe Reinforcement Learning
    Chow, Yinlam
    Nachum, Ofir
    Duenez-Guzman, Edgar
    Ghavamzadeh, Mohammad
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [9] Linguistic Lyapunov reinforcement learning control for robotic manipulators
    Kumar, Abhishek
    Sharma, Rajneesh
    [J]. NEUROCOMPUTING, 2018, 272 : 84 - 95