Multi-agent reinforcement learning for multi-area power exchange

Cited: 0
Authors
Xi, Jiachen [1 ]
Garcia, Alfredo [1 ]
Chen, Yu Christine [2 ]
Khatami, Roohallah [3 ]
Affiliations
[1] Texas A&M Univ, Dept Ind & Syst Engn, College Stn, TX 77840 USA
[2] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[3] Southern Illinois Univ, Sch Elect Comp & Biomed Engn, Carbondale, IL USA
Keywords
Power system; Reinforcement learning; Uncertainty; Decentralized algorithm; Actor-critic algorithm; MODEL; LOAD
DOI
10.1016/j.epsr.2024.110711
CLC classification
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline classification codes
0808; 0809
Abstract
Increasing renewable integration leads to faster and more frequent fluctuations in the power system net-load (load minus non-dispatchable renewable generation) along with greater uncertainty in its forecast. These can exacerbate the computational burden of centralized power system optimization (or market clearing) that accounts for variability and uncertainty in net load. Another layer of complexity pertains to estimating accurate models of spatio-temporal net-load uncertainty. Taken together, decentralized approaches for learning to optimize (or to clear a market) using only local information are compelling to explore. This paper develops a decentralized multi-agent reinforcement learning (MARL) approach that seeks to learn optimal policies for operating interconnected power systems under uncertainty. The proposed method incurs less computational and communication burden compared to a centralized stochastic programming approach and offers improved privacy preservation. Numerical simulations involving a three-area test system yield desirable results, with the resulting average net operation costs being less than 5% away from those obtained in a benchmark centralized model predictive control solution.
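The abstract describes a decentralized actor-critic scheme in which each area learns an operating policy from local information only. As a loose illustration of that general idea (not the paper's actual algorithm), the sketch below trains three independent linear-Gaussian actor-critic agents, one per area, each minimizing a purely local dispatch cost; the quadratic cost function, feature choice, and all hyperparameters are invented for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)


class AreaAgent:
    """One control area: linear Gaussian policy plus linear critic,
    updated from purely local net-load observations and local costs."""

    def __init__(self, lr=0.02, sigma=0.3):
        self.theta = np.zeros(2)  # actor weights on features [net_load, 1]
        self.w = np.zeros(2)      # critic weights (state-value baseline)
        self.lr = lr
        self.sigma = sigma

    def act(self, net_load):
        feats = np.array([net_load, 1.0])
        mean = float(self.theta @ feats)
        action = mean + self.sigma * rng.standard_normal()
        return action, mean, feats

    def update(self, feats, action, mean, cost):
        # Advantage = (negative cost) minus the critic's baseline.
        advantage = -cost - float(self.w @ feats)
        self.w += self.lr * advantage * feats            # critic step
        score = (action - mean) / self.sigma**2 * feats  # d log pi / d theta
        self.theta += self.lr * advantage * score        # actor step


def local_cost(net_load, dispatch):
    # Illustrative quadratic generation cost plus imbalance penalty.
    return 0.1 * dispatch**2 + (net_load - dispatch) ** 2


agents = [AreaAgent() for _ in range(3)]  # three-area toy system
for _ in range(3000):
    net_loads = rng.uniform(0.5, 1.5, size=3)  # independent local net loads
    for agent, nl in zip(agents, net_loads):
        action, mean, feats = agent.act(nl)
        agent.update(feats, action, mean, local_cost(nl, action))

# For this cost, the optimal dispatch at net load x is x / 1.1 (about 0.91 at x = 1).
learned = float(agents[0].theta @ np.array([1.0, 1.0]))
print(f"learned dispatch at net load 1.0: {learned:.2f}")
```

Note that each agent here sees only its own net load and cost, so no inter-area communication occurs during training; the actual paper additionally coordinates power exchange between areas, which this sketch omits.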
Pages: 9
Related papers
50 records total
  • [21] Learning to Share in Multi-Agent Reinforcement Learning
    Yi, Yuxuan
    Li, Ge
    Wang, Yaowei
    Lu, Zongqing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [22] MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning
    Malysheva, Aleksandra
    Kudenko, Daniel
    Shpilman, Aleksei
    2019 XVI INTERNATIONAL SYMPOSIUM PROBLEMS OF REDUNDANCY IN INFORMATION AND CONTROL SYSTEMS (REDUNDANCY), 2019, : 171 - 176
  • [23] Evolutionary Multi-Agent Deep Meta Reinforcement Learning Method for Swarm Intelligence Energy Management of Isolated Multi-Area Microgrid With Internet of Things
    Li, Jiawen
    Zhou, Tao
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (14) : 12923 - 12937
  • [24] Distributed energy management of multi-area integrated energy system based on multi-agent deep reinforcement learning (vol 157, 109867, 2024)
    Ding, Lifu
    Cui, Youkai
    Yan, Gangfeng
    Huang, Yaojia
    Fan, Zhen
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2024, 158
  • [25] Deep Reinforcement Learning for Multi-Agent Power Control in Heterogeneous Networks
    Zhang, Lin
    Liang, Ying-Chang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (04) : 2551 - 2564
  • [26] Multi-Agent Reinforcement Learning for Multi-Object Tracking
    Rosello, Pol
    Kochenderfer, Mykel J.
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 1397 - 1404
  • [27] Efficient multi-agent reinforcement learning HVAC power consumption optimization
    Miao, Chenyang
    Cui, Yunduan
    Li, Huiyun
    Wu, Xinyu
    ENERGY REPORTS, 2024, 12 : 5420 - 5431
  • [28] Multi-agent reinforcement learning for character control
    Li, Cheng
    Fussell, Levi
    Komura, Taku
    VISUAL COMPUTER, 2021, 37 (12): 3115 - 3123
  • [29] Parallel and distributed multi-agent reinforcement learning
    Kaya, M
    Arslan, A
    PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, 2001, : 437 - 441
  • [30] Coding for Distributed Multi-Agent Reinforcement Learning
    Wang, Baoqian
    Xie, Junfei
    Atanasov, Nikolay
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 10625 - 10631