Novel Data-Driven decentralized coordination model for electric vehicle aggregator and energy hub entities in multi-energy system using an improved multi-agent DRL approach

Citations: 15
Authors
Zhang, Bin [1 ]
Hu, Weihao [2 ]
Cao, Di [2 ]
Ghias, Amer M. Y. M. [3 ]
Chen, Zhe [1 ]
Affiliations
[1] Aalborg Univ, Dept Energy Technol, DK-9220 Aalborg, Denmark
[2] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 610000, Peoples R China
[3] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
Multi-energy system; Energy hub; Electric vehicle aggregator; Deep reinforcement learning; Multi-agent; MANAGEMENT; OPTIMIZATION; NETWORK;
DOI
10.1016/j.apenergy.2023.120902
CLC Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Codes
0807 ; 0820 ;
Abstract
An energy hub (EH) is an independent entity that benefits the efficiency, flexibility, and reliability of integrated energy systems (IESs). Meanwhile, the rapid emergence of electric vehicles (EVs) establishes the EV aggregator (EVAGG) as another independent entity that facilitates electricity exchange with the grid. However, owing to privacy considerations among different owners, it is challenging to find optimal coordinated strategies for such interconnected entities when only electrical-energy information can be exchanged. Moreover, parameter uncertainties (load demands, EV charging behaviors, wind power, and photovoltaic generation), a continuous decision space, dynamic energy flows, and a non-convex multi-objective function make the problem difficult to solve. To this end, this paper proposes a novel model-free multi-agent deep reinforcement learning (MADRL)-based decentralized coordination model that minimizes the energy costs of EH entities and maximizes the profits of EVAGGs. First, a long short-term memory (LSTM) module captures the future trends of the uncertainties. The coordination problem is then formulated as a Markov game and solved by an attention-enabled MADRL algorithm in which each EH or EVAGG entity is modeled as an adaptive agent. The attention mechanism lets each agent focus only on the state information relevant to its reward. The proposed MADRL adopts offline centralized training to learn the optimal coordinated control strategy, and decentralized execution so that agents' online decisions require only local measurements. A safety network handles the equality constraint (demand-supply balance). Simulation results show that the proposed method achieves results similar to those of a traditional model-based method with perfect knowledge of the system models, while its computation time is at least two orders of magnitude shorter.
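The attention mechanism described above can be illustrated with a minimal sketch: an agent forms a query from its own state, and scaled dot-product attention weights the state encodings of the other entities, so information unrelated to the agent's reward receives low weight. The function names and toy vectors below are illustrative assumptions, not the paper's implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: compare the agent's query with each
    source's key, then mix the values with the resulting weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

# Toy example: the query aligns with the first entity's key, so that
# entity's state dominates the mixed context vector.
ctx, w = attention(query=[1.0, 0.0],
                   keys=[[1.0, 0.0], [0.0, 1.0]],
                   values=[[1.0, 0.0], [0.0, 1.0]])
```

In a centralized-training setup, such a module would sit inside each agent's critic, letting the critic attend over the other agents' encoded observations instead of concatenating all of them.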
The testing results of the proposed method surpass those of the Concurrent and another MADRL method, with 10.79%/3.06% lower energy costs and 17.11%/6.82% higher aggregator profits, respectively. Moreover, the electric equality-constraint violation of the proposed method averages only 0.25 MW per day, which is a small and acceptable violation.
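The role of the safety network, enforcing the demand-supply balance equality constraint, can be approximated by the simplest possible mechanism: projecting the agents' raw dispatch decisions onto the balance hyperplane with a minimum-norm correction. This sketch conveys the constraint-handling idea only; it is an assumption, not the paper's actual safety network.

```python
def project_to_balance(actions, demand):
    """Minimum-Euclidean-norm projection of raw dispatch decisions onto
    the hyperplane sum(actions) == demand: the total imbalance is split
    equally across all agents."""
    gap = demand - sum(actions)
    return [a + gap / len(actions) for a in actions]

# Raw decisions supply 6.0 against a demand of 9.0; the 3.0 shortfall
# is shared equally, adding 1.0 to each agent's dispatch.
balanced = project_to_balance([1.0, 2.0, 3.0], demand=9.0)  # → [2.0, 3.0, 4.0]
```

A learned safety layer can generalize this idea to inequality and network constraints, but the equality case reduces to exactly this closed-form correction.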
Pages: 19