A Multiagent Reinforcement Learning Approach for Wind Farm Frequency Control

Cited by: 11
Authors
Liang, Yanchang [1 ]
Zhao, Xiaowei [1 ]
Sun, Li [2 ]
Affiliations
[1] Univ Warwick, Sch Engn, Intelligent Control & Smart Energy Res Grp, Coventry CV4 7AL, England
[2] Harbin Inst Technol, Sch Mech Engn & Automat, Shenzhen 518055, Peoples R China
Keywords
Frequency regulation; multiagent deep reinforcement learning (MADRL); wind farm; wind turbine machinery; inertia
DOI
10.1109/TII.2022.3182328
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
As wind turbines (WTs) become more prevalent, there is increasing interest in actively controlling their power output to participate in frequency regulation of the power grid. Conventional frequency regulation controllers use fixed gains, making it difficult for a WT to adjust its kinetic energy uptake to its operating conditions and to collaborate effectively with other WTs in the wind farm. In addition, the design of conventional frequency controllers does not consider their impact on the mechanical structure. To address these issues, in this article, we model the cooperative frequency control problem for all the WTs in a wind farm as a decentralized partially observable Markov decision process and use a multiagent deep reinforcement learning algorithm to solve it. We also develop a grid-connected wind farm simulation model based on MATLAB/Simulink and OpenFAST, which can reflect the detailed interactions between the electrical and mechanical components of WTs. Simulation results show that the proposed strategy is effective in reducing frequency drops and has less impact on mechanical structural deflections than traditional methods.
Pages: 1725-1734
Page count: 10
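
The abstract describes modeling cooperative wind-farm frequency support as a decentralized partially observable Markov decision process (Dec-POMDP) solved with multiagent deep reinforcement learning. The following is a minimal illustrative sketch of such a multiagent interface, not the paper's implementation (which couples MATLAB/Simulink with OpenFAST): the per-turbine observations (grid frequency deviation and own rotor speed), the supplementary power action, the toy swing-equation dynamics, and the reward weights are all hypothetical simplifications chosen only to make the Dec-POMDP structure concrete.

```python
# Toy Dec-POMDP-style wind farm frequency-control environment (illustrative only).
import numpy as np


class ToyWindFarmFreqEnv:
    def __init__(self, n_turbines=4, dt=0.1, inertia=6.0, damping=1.0):
        self.n = n_turbines          # number of wind turbine agents
        self.dt = dt                 # simulation step [s]
        self.H = inertia             # aggregate grid inertia constant (p.u., assumed)
        self.D = damping             # load damping coefficient (p.u., assumed)
        self.reset()

    def reset(self):
        self.df = 0.0                              # grid frequency deviation (p.u.)
        self.rotor = np.ones(self.n)               # per-turbine rotor speed (p.u.)
        self.load_step = 0.1                       # sudden load increase (p.u., assumed)
        return self._obs()

    def _obs(self):
        # Partial observability: each agent sees only the shared frequency
        # deviation and its own rotor speed.
        return [np.array([self.df, self.rotor[i]]) for i in range(self.n)]

    def step(self, actions):
        # actions: per-turbine supplementary power commands, clipped to [-0.1, 0.1] p.u.
        a = np.clip(np.asarray(actions, dtype=float), -0.1, 0.1)
        dp_wind = float(np.sum(a))
        # Simplified swing equation: 2H * d(df)/dt = P_wind_support - P_load - D * df
        self.df += self.dt * (dp_wind - self.load_step - self.D * self.df) / (2 * self.H)
        # Extra power is drawn from rotor kinetic energy, so rotors slow down.
        self.rotor -= self.dt * 0.05 * a / np.maximum(self.rotor, 0.5)
        # Shared reward: penalize frequency deviation and, as a crude proxy for
        # structural loading, aggressive power commands.
        reward = -abs(self.df) - 0.1 * float(np.sum(np.square(a)))
        return self._obs(), reward, False, {}


if __name__ == "__main__":
    env = ToyWindFarmFreqEnv()
    obs = env.reset()
    for _ in range(100):
        # Placeholder decentralized policies: a fixed droop-like response to df.
        actions = [-0.5 * o[0] for o in obs]
        obs, reward, done, _ = env.step(actions)
    print(f"final frequency deviation: {env.df:.4f} p.u.")
```

In the article the per-turbine agents are trained with a multiagent deep RL algorithm; the fixed droop-like policy above is only a stand-in to exercise the environment interface.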