A Deep Reinforcement Learning Optimization Method Considering Network Node Failures

Cited: 0
|
Authors
Ding, Xueying [1 ]
Liao, Xiao [1 ]
Cui, Wei [1 ]
Meng, Xiangliang [1 ]
Liu, Ruosong [2 ]
Ye, Qingshan [2 ]
Li, Donghe [2 ]
Affiliations
[1] State Grid Informat & Telecommun Grp Co Ltd, Beijing 100029, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Automat Sci & Engn, Xian 710049, Peoples R China
Keywords
microgrid; topology; deep reinforcement learning; electric power safety;
DOI
10.3390/en17174471
CLC Classification Number
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Classification Code
0807 ; 0820 ;
Abstract
Today's microgrid systems are characterized by diverse power factors and complex network structures. Existing studies on microgrid fault diagnosis and troubleshooting mostly focus on fault detection and operation optimization for a single power device. For increasingly complex microgrid systems, however, it becomes harder to contain faults within a specific spatiotemporal range, so power faults can spread and seriously endanger the safety of the microgrid. The topology optimization of the microgrid based on deep reinforcement learning proposed in this paper starts from the power grid as a whole and aims to minimize the overall failure rate of the microgrid by optimizing the grid topology. This approach confines internal faults to a small range, greatly improving the safety and reliability of microgrid operation. The proposed method can optimize the network topology for single-node and multi-node faults, reducing the influence range of a node fault by 21% and 58%, respectively.
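The abstract describes learning a grid topology that minimizes how far a node fault propagates. As a minimal sketch of that idea (not the paper's implementation), the toy below uses tabular Q-learning where a state is an edge on/off vector, an action toggles one edge, and the reward is the negative "influence range" of a fault, modeled here as the size of the connected component containing the faulty node. All names, the reward shape, and the 5-node graph are illustrative assumptions; a real method would also enforce connectivity and supply constraints so the agent cannot trivially disconnect everything.

```python
import random
from itertools import combinations

N = 5                                     # toy node count (assumption)
EDGES = list(combinations(range(N), 2))   # all candidate edges

def build_adj(state):
    """Adjacency sets for an edge on/off bit-vector."""
    adj = {u: set() for u in range(N)}
    for bit, (u, v) in zip(state, EDGES):
        if bit:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def influence_range(adj, fault):
    """Nodes reachable from the faulty node (its fault-influence range)."""
    seen, stack = {fault}, [fault]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def reward(state, fault=0):
    # Fewer nodes dragged down by the fault -> higher reward.
    return -influence_range(build_adj(state), fault)

def q_learn(episodes=300, eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    """Epsilon-greedy tabular Q-learning over edge-toggle actions."""
    rng, Q = random.Random(seed), {}
    for _ in range(episodes):
        state = tuple(1 for _ in EDGES)   # start fully connected
        for _ in range(len(EDGES)):
            qs = Q.setdefault(state, [0.0] * len(EDGES))
            a = (rng.randrange(len(EDGES)) if rng.random() < eps
                 else max(range(len(EDGES)), key=qs.__getitem__))
            nxt = list(state)
            nxt[a] ^= 1                   # toggle one edge
            nxt = tuple(nxt)
            nq = Q.setdefault(nxt, [0.0] * len(EDGES))
            qs[a] += alpha * (reward(nxt) + gamma * max(nq) - qs[a])
            state = nxt
    return Q

Q = q_learn()
```

The paper uses deep reinforcement learning precisely because a table over all 2^|EDGES| topologies does not scale; a neural network would replace `Q` for realistic grids.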
Pages: 13
Related Papers
50 records
  • [21] Deep Reinforcement Learning Optimization Method for Charging Control of Electric Vehicles
    Du, Mingqiu
    Li, Yan
    Wang, Biao
    Zhang, Yihan
    Luo, Pan
    Wang, Shaorong
    [J]. Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering, 2019, 39 (14): : 4042 - 4048
  • [22] Deep Reinforcement Learning-Enabled Bridge Management Considering Asset and Network Risks
    Yang, David Y.
    [J]. JOURNAL OF INFRASTRUCTURE SYSTEMS, 2022, 28 (03)
  • [23] Graphic Design Optimization Method Based on Deep Reinforcement Learning Model
    Zhang, Jiwen
    [J]. APPLIED MATHEMATICS AND NONLINEAR SCIENCES, 2023,
  • [24] Stereoscopic Projection Policy Optimization Method Based on Deep Reinforcement Learning
    An, Jing
    Si, Guang-Ya
    Zhang, Lei
    Liu, Wei
    Zhang, Xue-Chao
    [J]. ELECTRONICS, 2022, 11 (23)
  • [25] Key node identification of satellite time-varying network with deep reinforcement learning
    Wang, Liyang
    Li, Yun
    Ren, Ye
    Yu, Lina
    [J]. 2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 572 - 579
  • [26] Network Planning with Deep Reinforcement Learning
    Zhu, Hang
    Gupta, Varun
    Ahuja, Satyajeet Singh
    Tian, Yuandong
    Zhang, Ying
    Jin, Xin
    [J]. SIGCOMM '21: PROCEEDINGS OF THE 2021 ACM SIGCOMM 2021 CONFERENCE, 2021, : 258 - 271
  • [27] An Optimization Method for Non-IID Federated Learning Based on Deep Reinforcement Learning
    Meng, Xutao
    Li, Yong
    Lu, Jianchao
    Ren, Xianglin
    [J]. SENSORS, 2023, 23 (22)
  • [28] Accelerating Deep Reinforcement Learning for Digital Twin Network Optimization with Evolutionary Strategies
    Guemes-Palau, Carlos
    Almasan, Paul
    Xiao, Shihan
    Cheng, Xiangle
    Shi, Xiang
    Barlet-Ros, Pere
    Cabellos-Aparicio, Albert
    [J]. PROCEEDINGS OF THE IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM 2022, 2022,
  • [29] Wireless Network Design Optimization for Computer Teaching with Deep Reinforcement Learning Application
    Luo, Yumei
    Zhang, Deyu
    [J]. APPLIED ARTIFICIAL INTELLIGENCE, 2023, 37 (01)
  • [30] DeepLS: Local Search for Network Optimization Based on Lightweight Deep Reinforcement Learning
    Di Cicco, Nicola
    Ibrahimi, Memedhe
    Troia, Sebastian
    Tornatore, Massimo
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (01): : 108 - 119