A Deep Reinforcement Learning Optimization Method Considering Network Node Failures

Cited: 0
Authors
Ding, Xueying [1 ]
Liao, Xiao [1 ]
Cui, Wei [1 ]
Meng, Xiangliang [1 ]
Liu, Ruosong [2 ]
Ye, Qingshan [2 ]
Li, Donghe [2 ]
Institutions
[1] State Grid Informat & Telecommun Grp Co Ltd, Beijing 100029, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Automat Sci & Engn, Xian 710049, Peoples R China
Keywords
microgrid; topology; deep reinforcement learning; electric power safety
DOI
10.3390/en17174471
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline Codes
0807; 0820;
Abstract
Modern microgrid systems are characterized by a diversity of power elements and complex network structures. Existing studies on microgrid fault diagnosis and troubleshooting mostly focus on fault detection and operation optimization for a single power device. For increasingly complex microgrid systems, however, it is difficult to contain faults within a specific spatiotemporal range, so power faults can spread and severely endanger the safety of the microgrid. The microgrid topology optimization method based on deep reinforcement learning proposed in this paper starts from the power grid as a whole and minimizes the overall failure rate of the microgrid by optimizing the grid topology. This approach confines internal faults to a small range, greatly improving the safety and reliability of microgrid operation. The proposed method optimizes the network topology for both single-node and multi-node faults, reducing the influence range of node faults by 21% and 58%, respectively.
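The abstract gives no algorithmic detail, so the following is only a hypothetical illustration of the general idea it describes: a reinforcement learning agent reconfigures switchable lines to limit how far a node fault can propagate while keeping the grid connected. The 6-node grid, the candidate line set, the 2-hop fault-spread model, and the reward shaping below are all invented for this sketch (using tabular Q-learning rather than a deep network); none of it is the paper's actual method.

```python
import random

# Toy sketch: Q-learning over switchable lines in an invented 6-node microgrid.
# State = frozenset of closed lines; action = toggle one line.
# Reward penalizes the average 2-hop fault-spread size, with a larger
# penalty if the reconfiguration disconnects the grid.

NODES = list(range(6))
LINES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4), (2, 5)]

def adjacency(edges):
    adj = {n: set() for n in NODES}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def two_hop_spread(edges):
    """Average number of nodes a fault reaches within two hops."""
    adj = adjacency(edges)
    total = 0
    for start in NODES:
        reached = {start} | adj[start]
        for n in list(adj[start]):
            reached |= adj[n]
        total += len(reached)
    return total / len(NODES)

def connected(edges):
    """True if every node is reachable from node 0."""
    adj = adjacency(edges)
    seen, stack = {NODES[0]}, [NODES[0]]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(NODES)

def reward(edges):
    if not connected(edges):
        return -2.0 * len(NODES)  # losing load is worse than fault spread
    return -two_hop_spread(edges)

def train(episodes=300, horizon=8, eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    Q = {}
    best_state = frozenset(LINES)          # start with all lines closed
    best_r = reward(best_state)
    for _ in range(episodes):
        state = frozenset(LINES)
        for _ in range(horizon):
            if rng.random() < eps:         # epsilon-greedy exploration
                a = rng.randrange(len(LINES))
            else:
                a = max(range(len(LINES)),
                        key=lambda x: Q.get((state, x), 0.0))
            nxt = state ^ {LINES[a]}       # open or close the chosen line
            r = reward(nxt)
            best_next = max(Q.get((nxt, x), 0.0) for x in range(len(LINES)))
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + alpha * (r + gamma * best_next - q)
            state = nxt
            if r > best_r:
                best_state, best_r = nxt, r
    return best_state, best_r
```

Under this toy fault model a sparse but connected topology (e.g. a radial path) spreads faults less than a densely meshed one, so the agent learns to open redundant tie lines; a deep RL version, as in the paper, would replace the Q table with a neural network over a much larger topology space.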
Pages: 13
Related Papers (50 records total)
  • [41] FPGA Placement Optimization with Deep Reinforcement Learning
    Zhang, Junpeng
    Deng, Fang
    Yang, Xudong
    [J]. 2021 2ND INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND INTELLIGENT CONTROL (ICCEIC 2021), 2021, : 73 - 76
  • [42] Model Parallelism optimization with deep reinforcement learning
    Mirhoseini, Azalia
    [J]. 2018 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW 2018), 2018, : 855 - 855
  • [43] Deep Reinforcement Learning for RAN Optimization and Control
    Chen, Yu
    Chen, Jie
    Krishnamurthi, Ganesh
    Yang, Huijing
    Wang, Huahui
    Zhao, Wenjie
    [J]. 2021 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2021,
  • [45] Deep Reinforcement Learning for Traffic Light Optimization
    Coskun, Mustafa
    Baggag, Abdelkader
    Chawla, Sanjay
    [J]. 2018 18TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2018, : 564 - 571
  • [46] Optimization of Molecules via Deep Reinforcement Learning
    Zhou, Zhenpeng
    Kearnes, Steven
    Li, Li
    Zare, Richard N.
    Riley, Patrick
    [J]. SCIENTIFIC REPORTS, 2019, 9 (1)
  • [47] Multi-level dynamic reconfiguration and operation optimization method for an urban distribution network based on deep reinforcement learning
    Wang, Zihan
    Gao, Hongjun
    Gao, Yiwen
    Qing, Zhuyu
    Hu, Mingyang
    Liu, Junyong
    [J]. Dianli Xitong Baohu yu Kongzhi/Power System Protection and Control, 2022, 50 (24): : 60 - 70
  • [48] Deep neural network pruning method based on sensitive layers and reinforcement learning
    Yang, Wenchuan
    Yu, Haoran
    Cui, Baojiang
    Sui, Runqi
    Gu, Tianyu
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (SUPPL 2) : 1897 - 1917
  • [49] A novel method of heterogeneous combat network disintegration based on deep reinforcement learning
    Chen, Libin
    Wang, Chen
    Zeng, Chengyi
    Wang, Luyao
    Liu, Hongfu
    Chen, Jing
    [J]. FRONTIERS IN PHYSICS, 2022, 10
  • [50] Radio Resource Allocation Method for Network Slicing using Deep Reinforcement Learning
    Abiko, Yu
    Saito, Takato
    Ikeda, Daizo
    Ohta, Ken
    Mizuno, Tadanori
    Mineno, Hiroshi
    [J]. 2020 34TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN 2020), 2020, : 420 - 425