Safe Deep Reinforcement Learning for Microgrid Energy Management in Distribution Networks With Leveraged Spatial-Temporal Perception

Cited by: 23
Authors
Ye, Yujian [1 ]
Wang, Hongru [2 ]
Chen, Peiling [1 ]
Tang, Yi [1 ]
Strbac, Goran [3 ]
Affiliations
[1] Southeast Univ, Sch Elect Engn, Nanjing 210096, Peoples R China
[2] Southeast Univ, Sch Cyber Sci & Engn, Nanjing 210096, Peoples R China
[3] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2AZ, England
Funding
National Natural Science Foundation of China;
Keywords
Energy management; Optimization; Uncertainty; Reactive power; Distribution networks; HVAC; Process control; microgrids; network constraints; power system spatial-temporal perception; safe deep reinforcement learning;
DOI
10.1109/TSG.2023.3243170
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Microgrids (MGs) have recently attracted great interest as an effective solution to the challenging problem of managing distributed energy resources in distribution networks. In this context, although deep reinforcement learning (DRL) constitutes a well-suited model-free and data-driven methodological framework, its application to MG energy management remains challenging, owing to its limitations in environment status perception and constraint satisfaction. In this paper, the MG energy management problem is formalized as a Constrained Markov Decision Process and solved with the state-of-the-art interior-point policy optimization (IPO) method. In contrast to conventional DRL approaches, IPO facilitates efficient learning in multi-dimensional, continuous state and action spaces, while promoting satisfaction of the complex network constraints of the distribution network. The generalization capability of IPO is further enhanced by extracting spatial-temporal correlation features from the original MG operating status, combining the strengths of an edge-conditioned convolutional network and a long short-term memory network. Case studies based on IEEE 15-bus and 123-bus test feeders with real-world data demonstrate the superior performance of the proposed method in improving MG cost-effectiveness, safeguarding secure network operation, and adapting to uncertainty, through benchmarking against model-based and DRL-based baseline methods. Finally, the case studies also analyze the computational and scalability performance of the proposed and baseline methods.
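To make the constraint-handling idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of an IPO-style policy update in PyTorch: a PPO-like clipped reward surrogate is augmented with a logarithmic barrier on a first-order surrogate of the expected constraint cost, e.g., the cumulative network-constraint violation. All names such as `ipo_loss`, `cost_limit`, and `barrier_t` are illustrative assumptions, and the spatial-temporal feature extractor (edge-conditioned convolution plus LSTM) that would produce the policy's state input is omitted.

```python
# Minimal sketch of an interior-point policy optimization (IPO) update loss,
# assuming a PPO-style clipped surrogate plus a log-barrier on the expected
# constraint cost. Function and argument names are illustrative only.
import torch


def ipo_loss(new_logp, old_logp, adv_reward, adv_cost,
             ep_cost, cost_limit, clip_eps=0.2, barrier_t=50.0):
    """Return a loss to minimize for one IPO policy update.

    new_logp / old_logp  : log pi_theta(a|s) under the new / behaviour policy
    adv_reward / adv_cost: advantage estimates for reward and constraint cost
    ep_cost              : average episodic constraint cost under the old policy
    cost_limit           : constraint threshold (allowed violation budget)
    barrier_t            : barrier hardness; larger t -> weaker barrier
    """
    ratio = torch.exp(new_logp - old_logp)

    # PPO clipped surrogate for the reward objective (to be maximized).
    surr_reward = torch.min(
        ratio * adv_reward,
        torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv_reward,
    ).mean()

    # First-order surrogate of the expected episodic constraint cost.
    surr_cost = ep_cost + (ratio * adv_cost).mean()

    slack = cost_limit - surr_cost
    if slack.item() <= 0.0:
        # Infeasible iterate: recover by directly pushing the cost down.
        return surr_cost

    # Log barrier keeps the surrogate cost strictly inside the feasible region.
    barrier = torch.log(slack) / barrier_t

    # Minimizing the negative performs gradient ascent on (reward + barrier).
    return -(surr_reward + barrier)


# Illustrative usage with random batch data (shapes only; not real MG data).
if __name__ == "__main__":
    batch = 64
    new_logp = torch.randn(batch, requires_grad=True)
    old_logp = new_logp.detach() + 0.01 * torch.randn(batch)
    adv_r, adv_c = torch.randn(batch), 0.1 * torch.randn(batch)
    loss = ipo_loss(new_logp, old_logp, adv_r, adv_c,
                    ep_cost=torch.tensor(0.3), cost_limit=1.0)
    loss.backward()  # gradients flow through new_logp as in a policy network
```

As in standard interior-point methods, the log-barrier term approximates the constraint indicator more closely as `barrier_t` grows, so a common design choice is to start with a softer barrier and tighten it as training proceeds.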
Pages: 3759 - 3775
Page count: 17
Related papers
50 records in total
  • [1] Online Microgrid Energy Management Based on Safe Deep Reinforcement Learning
    Li, Hepeng
    Wang, Zhenhua
    Li, Lusi
    He, Haibo
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [2] Learning to Operate Distribution Networks With Safe Deep Reinforcement Learning
    Li, Hepeng
    He, Haibo
    IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (03) : 1860 - 1872
  • [3] Safe deep reinforcement learning for building energy management
    Wang, Xiangwei
    Wang, Peng
    Huang, Renke
    Zhu, Xiuli
    Arroyo, Javier
    Li, Ning
    APPLIED ENERGY, 2025, 377
  • [4] Lyapunov-Based Safe Reinforcement Learning for Microgrid Energy Management
    Hao, Guokai
    Li, Yuanzheng
    Li, Yang
    Jiang, Lin
    Zeng, Zhigang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [5] Deep reinforcement learning for energy management in a microgrid with flexible demand
    Nakabi, Taha Abdelhalim
    Toivanen, Pekka
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2021, 25
  • [6] Resilient Distribution Networks by Microgrid Formation Using Deep Reinforcement Learning
    Huang, Yuxiong
    Li, Gengfeng
    Chen, Chen
    Bian, Yiheng
    Qian, Tao
    Bie, Zhaohong
    IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (06) : 4918 - 4930
  • [7] Reinforcement learning for microgrid energy management
    Kuznetsova, Elizaveta
    Li, Yan-Fu
    Ruiz, Carlos
    Zio, Enrico
    Ault, Graham
    Bell, Keith
    ENERGY, 2013, 59 : 133 - 146
  • [8] Energy Management System by Deep Reinforcement Learning Approach in a Building Microgrid
    Dini, Mohsen
    Ossart, Florence
    ELECTRIMACS 2022, VOL 2, 2024, 1164 : 257 - 269
  • [9] Energy Optimization Management of Multi-microgrid using Deep Reinforcement Learning
    Zhang, Tingjun
    Yue, Dong
    Zhao, Nan
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 4049 - 4053
  • [10] Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning
    Ji, Ying
    Wang, Jianhui
    Xu, Jiacan
    Fang, Xiaoke
    Zhang, Huaguang
    ENERGIES, 2019, 12 (12)