Multiagent Bayesian Deep Reinforcement Learning for Microgrid Energy Management Under Communication Failures

Cited: 0
Authors
Zhou, Hao [1 ]
Aral, Atakan [2 ]
Brandic, Ivona [3 ]
Erol-Kantarci, Melike [1 ]
Affiliations
[1] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
[2] Univ Vienna, Fac Comp Sci, A-1090 Vienna, Austria
[3] Vienna Univ Technol, Fac Informat, A-1040 Vienna, Austria
Source
IEEE INTERNET OF THINGS JOURNAL | 2021 / Vol. 9 / Issue 14
Funding
Natural Sciences and Engineering Research Council of Canada; Austrian Science Fund;
Keywords
Collaborative multiagent; communication failure; deep Q-learning (DQN); energy management; microgrid; NETWORK; SYSTEM;
DOI
10.1109/JIOT.2021.3131719
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Microgrids (MGs) are important players in future transactive energy systems, where a number of intelligent Internet of Things (IoT) devices interact for energy management in the smart grid. Although there have been many works on MG energy management, most studies assume a perfect communication environment in which communication failures are not considered. In this article, we consider the MG as a multiagent environment with IoT devices in which AI agents exchange information with their peers for collaboration. However, the collaboration information may be lost due to communication failures or packet loss, and such events may affect the operation of the whole MG. To this end, we propose a multiagent Bayesian deep reinforcement learning (BA-DRL) method for MG energy management under communication failures. We first define a multiagent partially observable Markov decision process (MAPOMDP) to describe agents under communication failures, in which each agent can update its beliefs on the actions of its peers. Then, we apply a double deep Q-learning (DDQN) architecture for Q-value estimation in BA-DRL and propose a belief-based correlated equilibrium for the joint-action selection of multiagent BA-DRL. Finally, simulation results show that BA-DRL is robust to both power supply uncertainty and communication failure uncertainty. BA-DRL achieves 4.1% and 10.3% higher reward than Nash deep Q-learning (Nash-DQN) and the alternating direction method of multipliers (ADMM), respectively, under a 1% communication failure probability.
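To make the belief mechanism described in the abstract concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: a tabular stand-in for one agent's double-DQN target that, when a peer's action report is lost, marginalizes the next-state Q-values over a belief distribution on the peer's action. The names (q_online, q_target, belief, ddqn_target), the single-peer discrete-action setting, and all numeric values are assumptions for illustration only; the paper's BA-DRL additionally uses neural networks and a belief-based correlated equilibrium for joint-action selection, which are not shown here.

```python
import numpy as np

# Illustrative sketch only: a tabular stand-in for one agent's
# belief-weighted double-DQN target under communication failures.
# All names and sizes are hypothetical, not taken from the paper.

N_STATES, N_OWN_ACTIONS, N_PEER_ACTIONS = 8, 3, 3
GAMMA = 0.95

rng = np.random.default_rng(0)
q_online = rng.normal(size=(N_STATES, N_OWN_ACTIONS, N_PEER_ACTIONS))
q_target = q_online.copy()

# Belief over the peer's actions, used when the peer's message is lost
# due to a communication failure (here simply uniform for illustration).
belief = np.full(N_PEER_ACTIONS, 1.0 / N_PEER_ACTIONS)

def ddqn_target(reward, next_state, peer_action=None):
    """Double-DQN target; if peer_action is None (message lost),
    marginalize the next-state Q-values over the belief on the peer."""
    if peer_action is not None:
        q_next_online = q_online[next_state, :, peer_action]
        q_next_target = q_target[next_state, :, peer_action]
    else:
        q_next_online = q_online[next_state] @ belief
        q_next_target = q_target[next_state] @ belief
    # Double DQN: the online table selects the action, the target table evaluates it.
    best_a = int(np.argmax(q_next_online))
    return reward + GAMMA * q_next_target[best_a]

# Example: a transition in which the peer's action report was dropped.
print(ddqn_target(reward=1.2, next_state=4, peer_action=None))
```

In this simplified view, a successful message pins the peer's action to its reported value, while a lost message replaces it with an expectation under the agent's current belief; the same idea carries over when the Q-table is replaced by the DDQN function approximators used in the paper.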
Pages: 11685-11698
Number of pages: 14
Related Articles
50 records in total
  • [1] Multiagent Bayesian Deep Reinforcement Learning for Microgrid Energy Management under Communication Failures
    Zhou, Hao
    Aral, Atakan
    Brandic, Ivona
    Erol-Kantarci, Melike
    [J]. IEEE Internet of Things Journal, 2022, 9 (14) : 11685 - 11698
  • [2] Multiagent Reinforcement Learning With Learning Automata for Microgrid Energy Management and Decision Optimization
    Fang, Xiaohan
    Wang, Jinkuan
    Yin, Chunhui
    Han, Yinghua
    Zhao, Qiang
    [J]. PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 779 - 784
  • [3] Distributed quantum multiagent deep meta reinforcement learning for area autonomy energy management of a multiarea microgrid
    Li, Jiawen
    Tao, Zhou
    He, Keke
    Yu, Hengwen
    Du, Hongwei
    Liu, Shuangyu
    Cui, Haoyang
    [J]. APPLIED ENERGY, 2023, 343
  • [4] Deep reinforcement learning for energy management in a microgrid with flexible demand
    Nakabi, Taha Abdelhalim
    Toivanen, Pekka
    [J]. SUSTAINABLE ENERGY GRIDS & NETWORKS, 2021, 25
  • [5] Reinforcement learning for microgrid energy management
    Kuznetsova, Elizaveta
    Li, Yan-Fu
    Ruiz, Carlos
    Zio, Enrico
    Ault, Graham
    Bell, Keith
    [J]. ENERGY, 2013, 59 : 133 - 146
  • [6] Online Microgrid Energy Management Based on Safe Deep Reinforcement Learning
    Li, Hepeng
    Wang, Zhenhua
    Li, Lusi
    He, Haibo
    [J]. 2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [7] Energy Management System by Deep Reinforcement Learning Approach in a Building Microgrid
    Dini, Mohsen
    Ossart, Florence
    [J]. ELECTRIMACS 2022, VOL 2, 2024, 1164 : 257 - 269
  • [8] Energy Optimization Management of Multi-microgrid using Deep Reinforcement Learning
    Zhang, Tingjun
    Yue, Dong
    Zhao, Nan
    [J]. 2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 4049 - 4053
  • [9] Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning
    Ji, Ying
    Wang, Jianhui
    Xu, Jiacan
    Fang, Xiaoke
    Zhang, Huaguang
    [J]. ENERGIES, 2019, 12 (12)
  • [10] Microgrid energy management using deep Q-network reinforcement learning
    Alabdullah, Mohammed H.
    Abido, Mohammad A.
    [J]. ALEXANDRIA ENGINEERING JOURNAL, 2022, 61 (11) : 9069 - 9078