Adaptive multi-agent reinforcement learning for dynamic pricing and distributed energy management in virtual power plant networks

Cited by: 0
Authors
Yao, Jian-Dong [1 ]
Hao, Wen-Bin [1 ]
Meng, Zhi-Gao [1 ]
Xie, Bo [1 ]
Chen, Jian-Hua [1 ]
Wei, Jia-Qi [1 ]
Affiliation
[1] State Grid Sichuan Electric Power Company Chengdu Power Supply Company, Chengdu, 610041, China
DOI: 10.1016/j.jnlest.2024.100290
Abstract
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation. © 2024 The Authors
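The abstract describes a MARL algorithm built on actor-critic architectures and experience replay. As a generic illustration only (this is not the authors' implementation; the class name, capacity, and transition layout are assumptions), the experience-replay component each agent would maintain can be sketched as:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay buffer, as commonly used in
    actor-critic training loops. A minimal sketch, not the paper's code."""

    def __init__(self, capacity=10000):
        # Oldest transitions are evicted automatically once full.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one (s, a, r, s', done) transition.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, breaking temporal correlations
        # before a critic/actor gradient update.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Example: one buffer per VPP agent, filled during simulation rollouts.
buf = ReplayBuffer(capacity=100)
for t in range(5):
    buf.push(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
print(len(buf))  # → 5
```

In a multi-agent setting, each VPP agent (and the distribution system operator agent) would typically keep its own buffer and sample minibatches independently when updating its actor and critic networks.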
Related papers (50 total)
  • [31] Distributed energy management of multi-area integrated energy system based on multi-agent deep reinforcement learning
    Ding, Lifu
    Cui, Youkai
    Yan, Gangfeng
    Huang, Yaojia
    Fan, Zhen
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2024, 157
  • [32] A multi-agent system for energy management of distributed power sources
    Lagorse, Jeremy
    Paire, Damien
    Miraoui, Abdellatif
    RENEWABLE ENERGY, 2010, 35 (01) : 174 - 182
  • [33] DMADRL: A Distributed Multi-agent Deep Reinforcement Learning Algorithm for Cognitive Offloading in Dynamic MEC Networks
    Yi, Meng
    Yang, Peng
    Du, Miao
    Ma, Ruochen
    NEURAL PROCESSING LETTERS, 2022, 54 (05) : 4341 - 4373
  • [35] Deep Reinforcement Learning for Multi-Agent Power Control in Heterogeneous Networks
    Zhang, Lin
    Liang, Ying-Chang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (04) : 2551 - 2564
  • [36] Multi-Agent Deep Reinforcement Learning based Power Control for Large Energy Harvesting Networks
    Sharma, Mohit K.
    Zappone, Alessio
    Debbah, Merouane
    Assaad, Mohamad
    17TH INTERNATIONAL SYMPOSIUM ON MODELING AND OPTIMIZATION IN MOBILE, AD HOC, AND WIRELESS NETWORKS (WIOPT 2019), 2019, : 163 - 169
  • [37] Safe Multi-Agent Deep Reinforcement Learning for Dynamic Virtual Network Allocation
    Suzuki, Akito
    Harada, Shigeaki
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [38] Cooperative Multi-Agent Deep Reinforcement Learning for Dynamic Virtual Network Allocation
    Suzuki, Akito
    Kawahara, Ryoichi
    Harada, Shigeaki
    30TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2021), 2021,
  • [39] A Distributed Multi-Agent Dynamic Area Coverage Algorithm Based on Reinforcement Learning
    Xiao, Jian
    Wang, Gang
    Zhang, Ying
    Cheng, Lei
    IEEE ACCESS, 2020, 8 : 33511 - 33521
  • [40] Multi-Agent Reinforcement Learning-Based Distributed Dynamic Spectrum Access
    Albinsaid, Hasan
    Singh, Keshav
    Biswas, Sudip
    Li, Chih-Peng
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (02) : 1174 - 1185