Deep reinforcement learning based multi-level dynamic reconfiguration for urban distribution network: a cloud-edge collaboration architecture

Cited by: 1
Authors
Siyuan Jiang [1 ]
Hongjun Gao [1 ]
Xiaohui Wang [2 ]
Junyong Liu [1 ]
Kunyu Zuo [3 ]
Institutions
[1] College of Electrical Engineering, Sichuan University
[2] China Electric Power Research Institute
[3] Electrical and Computer Engineering Department, Stevens Institute of Technology
Fund
National Natural Science Foundation of China;
Keywords
Cloud-edge collaboration architecture; Multiagent deep reinforcement learning; Multi-level dynamic reconfiguration; Offline learning; Online learning;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]; TM73 [Power System Dispatching, Management, and Communication];
Discipline Classification Codes
080802 ; 081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the construction of the power Internet of Things (IoT), communication between smart devices in urban distribution networks has gradually moved towards high speed, high compatibility, and low latency, providing reliable support for reconfiguration optimization in urban distribution networks. Thus, this study proposed a deep reinforcement learning based multi-level dynamic reconfiguration method for urban distribution networks in a cloud-edge collaboration architecture to obtain a real-time optimal multi-level dynamic reconfiguration solution. First, the multi-level dynamic reconfiguration method was discussed, covering the feeder, transformer, and substation levels. Subsequently, a multi-agent system was combined with the cloud-edge collaboration architecture to build a deep reinforcement learning model for multi-level dynamic reconfiguration in an urban distribution network. The cloud-edge collaboration architecture effectively supports the multi-agent system's "centralized training and decentralized execution" operation mode and improves the learning efficiency of the model. Thereafter, for the multi-agent system, this study adopted a combination of offline and online learning to endow the model with the ability to automatically optimize and update its strategy. In the offline learning phase, a multi-agent conservative Q-learning (MACQL) algorithm based on Q-learning was proposed to stabilize the learning results and reduce the risk of the subsequent online learning phase. In the online learning phase, a multi-agent deep deterministic policy gradient (MADDPG) algorithm based on policy gradients was proposed to explore the action space and update the experience pool. Finally, the effectiveness of the proposed method was verified through a simulation analysis of a real-world 445-node system.
Pages: 1-14
Page count: 14
Related Papers
50 records
  • [1] Deep reinforcement learning based multi-level dynamic reconfiguration for urban distribution network: a cloud-edge collaboration architecture
    Jiang, Siyuan
    Gao, Hongjun
    Wang, Xiaohui
    Liu, Junyong
    Zuo, Kunyu
    [J]. GLOBAL ENERGY INTERCONNECTION-CHINA, 2023, 6 (01): : 1 - 14
  • [2] A Cloud-Edge Collaboration Solution for Distribution Network Reconfiguration Using Multi-Agent Deep Reinforcement Learning
    Gao, Hongjun
    Wang, Renjun
    He, Shuaijia
    Wang, Lingfeng
    Liu, Junyong
    Chen, Zhe
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2024, 39 (02) : 3867 - 3879
  • [3] Time-Segmented Multi-Level Reconfiguration in Distribution Network: A Novel Cloud-Edge Collaboration Framework
    Gao, Hongjun
    Ma, Wang
    He, Shuaijia
    Wang, Lingfeng
    Liu, Junyong
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (04) : 3319 - 3322
  • [4] Multi-level dynamic reconfiguration and operation optimization method for an urban distribution network based on deep reinforcement learning
    Wang, Zihan
    Gao, Hongjun
    Gao, Yiwen
    Qing, Zhuyu
    Hu, Mingyang
    Liu, Junyong
    [J]. Dianli Xitong Baohu yu Kongzhi/Power System Protection and Control, 2022, 50 (24): : 60 - 70
  • [5] Multi-level Dynamic Reconfiguration Method for Urban Distribution Networks Based on Deep Learning Algorithm
    Jiang, Siyuan
    Gao, Hongjun
    Ma, Wang
    Wang, Renjun
    Shi, Cheng
    Liu, Junyong
    [J]. Gaodianya Jishu/High Voltage Engineering, 2024, 50 (04): : 1468 - 1477
  • [6] Cloud-Edge Collaboration-Based Distribution Network Reconfiguration for Voltage Preventive Control
    Yue, Dong
    He, Ziwei
    Dou, Chunxia
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (12) : 11542 - 11552
  • [7] Cloud-Edge Training Architecture for Sim-to-Real Deep Reinforcement Learning
    Cao, Hongpeng
    Theile, Mirco
    Wyrwal, Federico G.
    Caccamo, Marco
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 9363 - 9370
  • [8] Cloud-edge collaboration task scheduling in cloud manufacturing: An attention-based deep reinforcement learning approach
    Chen, Zhen
    Zhang, Lin
    Wang, Xiaohan
    Wang, Kunyu
    [J]. COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 177
  • [9] A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration
    Zhang, Xinyu
    Hu, Zhigang
    Liang, Yang
    Xiao, Hui
    Xu, Aikun
    Zheng, Meiguang
    Sun, Chuan
    [J]. JOURNAL OF GRID COMPUTING, 2024, 22 (01)