Adaptive multi-agent reinforcement learning for dynamic pricing and distributed energy management in virtual power plant networks

Cited: 0
Authors
Yao, Jian-Dong [1 ]
Hao, Wen-Bin [1 ]
Meng, Zhi-Gao [1 ]
Xie, Bo [1 ]
Chen, Jian-Hua [1 ]
Wei, Jia-Qi [1 ]
Affiliations
[1] State Grid Sichuan Electric Power Company Chengdu Power Supply Company, Chengdu, 610041, China
DOI
10.1016/j.jnlest.2024.100290
Abstract
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation. © 2024 The Authors
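The abstract names the main technical ingredients of the method: a Markov decision process formulation, conventionally the tuple (S, A, P, r, γ), actor-critic agents, and experience replay. As a rough, hypothetical illustration of how these pieces fit together for a single VPP agent, the sketch below implements a small deterministic actor-critic with a replay buffer in PyTorch. It is not the authors' code: the state layout, action definition, network sizes, and hyperparameters are all assumptions made for the example, and it omits refinements such as target networks.

# Minimal sketch of one VPP agent: deterministic actor-critic with an
# experience replay buffer. Everything here (state layout, dimensions,
# hyperparameters) is a hypothetical illustration, not the paper's code.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4    # assumed state: [local load, renewable output, storage level, grid price]
ACTION_DIM = 1   # assumed action: one normalized price/dispatch offer in [-1, 1]
GAMMA = 0.99     # discount factor

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay: (s, a, r, s') tuples

def act(state, noise=0.1):
    # Deterministic policy plus Gaussian exploration noise, clipped to the action range.
    with torch.no_grad():
        a = actor(torch.as_tensor(state, dtype=torch.float32))
    return (a + noise * torch.randn_like(a)).clamp(-1.0, 1.0)

def q_value(s, a):
    # Critic scores state-action pairs: Q(s, a).
    return critic(torch.cat([s, a], dim=-1))

def update(batch_size=64):
    # One actor-critic step on a minibatch sampled uniformly from the replay buffer.
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.stack([torch.as_tensor(x[i], dtype=torch.float32)
                                for x in batch]) for i in range(4))
    # Critic update: regress Q(s, a) toward the one-step TD target r + gamma * Q(s', pi(s')).
    with torch.no_grad():
        target = r.unsqueeze(-1) + GAMMA * q_value(s2, actor(s2))
    critic_loss = nn.functional.mse_loss(q_value(s, a), target)
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    # Actor update: ascend the critic's estimate of Q(s, pi(s)).
    actor_loss = -q_value(s, actor(s)).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()

In the multi-agent setting the paper describes, each VPP and the distribution system operator would run an agent of roughly this shape, coupled through the shared electricity price and grid constraints in their rewards. A training loop would call act() on the current local state, step the simulated market, append (s, a, r, s') to replay, and then call update().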
Related papers
50 records in total
  • [21] Parallel and distributed multi-agent reinforcement learning
    Kaya, M
    Arslan, A
    PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, 2001: 437 - 441
  • [22] Coding for Distributed Multi-Agent Reinforcement Learning
    Wang, Baoqian
    Xie, Junfei
    Atanasov, Nikolay
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 10625 - 10631
  • [23] Multi-Agent Deep Reinforcement Learning for Distributed Resource Management in Wirelessly Powered Communication Networks
    Hwang, Sangwon
    Kim, Hanjin
    Lee, Hoon
    Lee, Inkyu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (11): 14055 - 14060
  • [24] Multi-Agent Reinforcement Learning for Smart Community Energy Management
    Wilk, Patrick
    Wang, Ning
    Li, Jie
    ENERGIES, 2024, 17 (20)
  • [25] Online Reinforcement Learning in Multi-Agent Systems for Distributed Energy Systems
    Menon, Bharat R.
    Menon, Sangeetha B.
    Srinivasan, Dipti
    Jain, Lakhmi
    2014 IEEE INNOVATIVE SMART GRID TECHNOLOGIES - ASIA (ISGT ASIA), 2014: 791 - 796
  • [26] Adaptive and Dynamic Service Composition via Multi-agent reinforcement learning
    Wang, Hongbing
    Wu, Qin
    Chen, Xin
    Yu, Qi
    Zheng, Zibin
    Bouguettaya, Athman
    2014 IEEE 21ST INTERNATIONAL CONFERENCE ON WEB SERVICES (ICWS 2014), 2014: 447 - 454
  • [27] Transactive Multi-Agent Reinforcement Learning for Distributed Energy Price Localization
    Spangher, Lucas
    BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021: 244 - 245
  • [28] Learning competitive pricing strategies by multi-agent reinforcement learning
    Kutschinski, E
    Uthmann, T
    Polani, D
    JOURNAL OF ECONOMIC DYNAMICS & CONTROL, 2003, 27 (11-12): 2207 - 2218
  • [29] Multi-agent Deep Reinforcement Learning for Distributed Energy Management and Strategy Optimization of Microgrid Market
    Fang, Xiaohan
    Zhao, Qiang
    Wang, Jinkuan
    Han, Yinghua
    Li, Yuchun
    SUSTAINABLE CITIES AND SOCIETY, 2021, 74
  • [30] Multi-agent reinforcement learning with adaptive mimetism
    Yamaguchi, T
    Miura, M
    Yachida, M
    ETFA '96 - 1996 IEEE CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION, PROCEEDINGS, VOLS 1 AND 2, 1996: 288 - 294