Risk-Aware Energy Scheduling for Edge Computing With Microgrid: A Multi-Agent Deep Reinforcement Learning Approach

Cited by: 27
|
Authors
Munir, Md Shirajum [1 ]
Abedin, Sarder Fakhrul [1 ,2 ]
Tran, Nguyen H. [3 ]
Han, Zhu [1 ,4 ]
Huh, Eui-Nam [1 ]
Hong, Choong Seon [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
[2] Mid Sweden Univ, Dept Informat Syst & Technol, S-85170 Sundsvall, Sweden
[3] Univ Sydney, Sch Comp Sci, Sydney, NSW 2006, Australia
[4] Univ Houston, Elect & Comp Engn Dept, Houston, TX 77004 USA
Keywords
Microgrids; Energy consumption; Task analysis; Wireless networks; Renewable energy sources; Estimation; Uncertainty; Multi-access edge computing (MEC); microgrid; multi-agent deep reinforcement learning; conditional value-at-risk (CVaR); stochastic game; demand-response (DR); EMERGENCY DEMAND RESPONSE; SMALL-CELL NETWORKS; RESOURCE-ALLOCATION; MOBILE; OPTIMIZATION; MANAGEMENT; ALGORITHM; GAME;
DOI
10.1109/TNSM.2021.3049381
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
In recent years, multi-access edge computing (MEC) has become a key enabler for handling the massive expansion of Internet of Things (IoT) applications and services. However, the energy consumption of an MEC network depends on volatile tasks, which induces risk in energy demand estimation. As an energy supplier, a microgrid can facilitate seamless energy supply. However, the risk associated with energy supply also increases due to unpredictable energy generation from renewable and non-renewable sources. In particular, the risk of energy shortfall involves uncertainties in both energy consumption and generation. In this article, we study a risk-aware energy scheduling problem for a microgrid-powered MEC network. First, we formulate an optimization problem considering the conditional value-at-risk (CVaR) measure for both energy consumption and generation, where the objective is to minimize the expected residual of scheduled energy for the MEC network, and we show that this problem is NP-hard. Second, we analyze the formulated problem using a multi-agent stochastic game that ensures a joint-policy Nash equilibrium, and show the convergence of the proposed model. Third, we derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks. This method mitigates the curse of dimensionality of the state space and chooses the best policy among the agents for the proposed problem. Finally, the experimental results show that, by incorporating CVaR, the proposed model achieves a significant gain in energy-scheduling accuracy over both the single-agent and random-agent models.
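The abstract's risk measure is CVaR: the expected loss in the worst (1 − α) tail beyond the value-at-risk (VaR) quantile. As a minimal illustrative sketch (not the paper's exact formulation), an empirical CVaR over sampled energy-shortfall values could be computed as:

```python
import numpy as np

def cvar(samples: np.ndarray, alpha: float = 0.95) -> float:
    """Empirical CVaR of a loss distribution.

    VaR_alpha is the alpha-quantile of the losses; CVaR_alpha is the
    mean of the losses at or above that quantile (the tail average).
    `samples` here would stand for sampled energy-shortfall values.
    """
    var = np.quantile(samples, alpha)       # value-at-risk threshold
    tail = samples[samples >= var]          # worst (1 - alpha) tail
    return float(tail.mean())               # expected tail loss
```

For instance, over the losses 1..100 with α = 0.95, VaR is about 95 and CVaR averages the worst five outcomes, giving 98 — CVaR is always at least as large as VaR, which is why it is the more conservative scheduling criterion.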
Pages: 3476-3497
Page count: 22
Related Papers
(50 in total)
  • [1] Multi-agent Deep Reinforcement Learning for Microgrid Energy Scheduling
    Zuo, Zhiqiang
    Li, Zhi
    Wang, Yijing
    [J]. 2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 6184 - 6189
  • [2] Multi-Agent Reinforcement Learning Approach for Residential Microgrid Energy Scheduling
    Fang, Xiaohan
    Wang, Jinkuan
    Song, Guanru
    Han, Yinghua
    Zhao, Qiang
    Cao, Zhiao
    [J]. ENERGIES, 2020, 13 (01)
  • [3] Multi-task scheduling in vehicular edge computing: a multi-agent reinforcement learning approach
    Zhao, Yiming
    Mo, Lei
    Liu, Ji
    [J]. CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2024,
  • [4] Power flow adjustment for smart microgrid based on edge computing and multi-agent deep reinforcement learning
    Pu, Tianjiao
    Wang, Xinying
    Cao, Yifan
    Liu, Zhicheng
    Qiu, Chao
    Qiao, Ji
    Zhang, Shuhua
    [J]. JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2021, 10 (01):
  • [6] A Delay-Optimal Task Scheduling Strategy for Vehicle Edge Computing Based on the Multi-Agent Deep Reinforcement Learning Approach
    Nie, Xuefang
    Yan, Yunhui
    Zhou, Tianqing
    Chen, Xingbang
    Zhang, Dingding
    [J]. ELECTRONICS, 2023, 12 (07)
  • [7] A novel multi-agent reinforcement learning approach for job scheduling in Grid computing
    Wu, Jun
    Xu, Xin
    Zhang, Pengcheng
    Liu, Chunming
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2011, 27 (05): : 430 - 439
  • [8] Energy-Aware Multi-Server Mobile Edge Computing: A Deep Reinforcement Learning Approach
    Naderializadeh, Navid
    Hashemi, Morteza
    [J]. CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 383 - 387
  • [9] Multi-agent deep reinforcement learning for online request scheduling in edge cooperation networks
    Zhang, Yaqiang
    Li, Ruyang
    Zhao, Yaqian
    Li, Rengang
    Wang, Yanwei
    Zhou, Zhangbing
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 141 : 258 - 268
  • [10] Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning
    Harrold, Daniel J. B.
    Cao, Jun
    Fan, Zhong
    [J]. APPLIED ENERGY, 2022, 318