Coordinated energy management strategy for multi-energy hub with thermo-electrochemical effect based power-to-ammonia: A multi-agent deep reinforcement learning enabled approach

Cited: 8
Authors
Xiong, Kang [1 ]
Hu, Weihao [1 ]
Cao, Di [1 ]
Li, Sichen [1 ]
Zhang, Guozhou [1 ]
Liu, Wen [2 ]
Huang, Qi [3 ]
Chen, Zhe [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu, Peoples R China
[2] Univ Utrecht, Copernicus Inst Sustainable Dev, Princetonlaan 8a, NL-3584 CB Utrecht, Netherlands
[3] Southwest Univ Sci & Technol, Mianyang 621010, Peoples R China
[4] Aalborg Univ, Dept Energy Technol, Aalborg, Denmark
Keywords
Power-to-ammonia; Renewable energy; Multi-energy hub; Multi-agent deep reinforcement learning; Nitrogen reduction reaction; Optimization; System; Model
DOI
10.1016/j.renene.2023.05.067
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Classification Code
08; 0830
Abstract
Power-to-ammonia (P2A) technology has attracted increasing attention since ammonia is recognized as a natural zero-carbon fuel. In this context, this paper constructs a renewable-energy-powered multi-energy hub (MEH) system that integrates a thermo-electrochemical effect based P2A facility. Subsequently, the energy management of the proposed MEH system is cast as a multi-agent coordinated optimization problem, which aims to minimize operating cost and carbon dioxide emissions while satisfying constraints. Then, a multi-agent deep reinforcement learning method called CommNet is applied to solve this problem and obtain the optimal coordinated energy management strategy for each energy hub by achieving distributed computation of global information. Finally, the simulation results show that the proposed method achieves better performance in reducing operating cost and carbon emissions than other benchmark methods.
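The core of the CommNet architecture named in the abstract is a communication layer in which each agent (here, each energy hub) updates its hidden state using the mean of the other agents' hidden states, so global information is shared through a distributed computation. A minimal sketch of one such communication step is shown below; the layer sizes, the tanh nonlinearity, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def commnet_step(h, W_h, W_c):
    """One CommNet communication layer: each agent mixes its own hidden
    state with the mean of all *other* agents' hidden states."""
    n = h.shape[0]                        # number of agents (energy hubs)
    total = h.sum(axis=0, keepdims=True)  # sum over all agents, shape (1, d)
    c = (total - h) / max(n - 1, 1)       # per-agent mean of the others' states
    return np.tanh(h @ W_h + c @ W_c)     # updated hidden states, shape (n, d)

rng = np.random.default_rng(0)
n_agents, d = 3, 4                        # e.g. three energy hubs, 4-dim state
h = rng.standard_normal((n_agents, d))    # initial hidden states
W_h = rng.standard_normal((d, d))         # self-transition weights
W_c = rng.standard_normal((d, d))         # communication weights
h_next = commnet_step(h, W_h, W_c)
print(h_next.shape)                       # (3, 4)
```

In a full training loop these hidden states would feed each agent's policy head, with all weights shared across agents and trained end-to-end by reinforcement learning.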
Pages: 216-232 (17 pages)
Related Papers (50 total)
  • [31] Distributed multi-agent based coordinated power management and control strategy for microgrids with distributed energy resources
    Rahman, M. S.
    Oo, A. M. T.
    ENERGY CONVERSION AND MANAGEMENT, 2017, 139 : 20 - 32
  • [32] Multi-Agent Reinforcement Learning for Smart Community Energy Management
    Wilk, Patrick
    Wang, Ning
    Li, Jie
    ENERGIES, 2024, 17 (20)
  • [33] Multi-agent microgrid energy management based on deep learning forecaster
    Afrasiabi, Mousa
    Mohammadi, Mohammad
    Rastegar, Mohammad
    Kargarian, Amin
    ENERGY, 2019, 186
  • [34] Multi-agent deep reinforcement learning for Smart building energy management with chance constraints
    Deng, Jingchuan
    Wang, Xinsheng
    Meng, Fangang
    ENERGY AND BUILDINGS, 2025, 331
  • [35] Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning
    Ajagekar, Akshay
    Decardi-Nelson, Benjamin
    You, Fengqi
    APPLIED ENERGY, 2024, 355
  • [36] Towards Pareto-optimal energy management in integrated energy systems: A multi-agent and multi-objective deep reinforcement learning approach
    Dou, Jiaming
    Wang, Xiaojun
    Liu, Zhao
    Sun, Qingkai
    Wang, Xihao
    He, Jinghan
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2024, 159
  • [37] Decentralized multi-agent based energy management of microgrid using reinforcement learning
    Samadi, Esmat
    Badri, Ali
    Ebrahimpour, Reza
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2020, 122
  • [38] Coordinated control of wind turbine and hybrid energy storage system based on multi-agent deep reinforcement learning for wind power smoothing
    Wang, Xin
    Zhou, Jianshu
    Qin, Bin
    Guo, Lingzhong
    JOURNAL OF ENERGY STORAGE, 2023, 57
  • [39] Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management
    Lu, Renzhi
    Li, Yi-Chang
    Li, Yuting
    Jiang, Junhui
    Ding, Yuemin
    APPLIED ENERGY, 2020, 276
  • [40] A safe reinforcement learning approach for multi-energy management of smart home
    Ding, Hongyuan
    Xu, Yan
    Hao, Benjamin Chew Si
    Li, Qiaoqiao
    Lentzakis, Antonis
    ELECTRIC POWER SYSTEMS RESEARCH, 2022, 210