Deep reinforcement learning based multi-level dynamic reconfiguration for urban distribution network: a cloud-edge collaboration architecture

Cited by: 7
Authors
Jiang, Siyuan [1 ]
Gao, Hongjun [1 ]
Wang, Xiaohui [2 ]
Liu, Junyong [1 ]
Zuo, Kunyu [3 ]
Affiliations
[1] Sichuan Univ, Coll Elect Engn, Chengdu 610065, Sichuan, Peoples R China
[2] China Elect Power Res Inst, Beijing 100192, Peoples R China
[3] Stevens Inst Technol, Elect & Comp Engn Dept, Hoboken, NJ 07030 USA
Source
GLOBAL ENERGY INTERCONNECTION-CHINA | 2023 / Vol. 6 / No. 01
Funding
National Natural Science Foundation of China;
Keywords
Cloud-edge collaboration architecture; Multi-agent deep reinforcement learning; Multi-level dynamic reconfiguration; Offline learning; Online learning; EFFICIENT;
DOI
10.1016/j.gloei.2023.02.001
CLC Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification
0807 ; 0820 ;
Abstract
With the construction of the power Internet of Things (IoT), communication between smart devices in urban distribution networks has gradually moved toward high speed, high compatibility, and low latency, which provides reliable support for reconfiguration optimization in urban distribution networks. This study therefore proposed a deep reinforcement learning based multi-level dynamic reconfiguration method for urban distribution networks in a cloud-edge collaboration architecture to obtain a real-time optimal multi-level dynamic reconfiguration solution. First, the multi-level dynamic reconfiguration method was discussed, covering the feeder, transformer, and substation levels. Subsequently, a multi-agent system was combined with the cloud-edge collaboration architecture to build a deep reinforcement learning model for multi-level dynamic reconfiguration in an urban distribution network. The cloud-edge collaboration architecture effectively supports the multi-agent system in the "centralized training and decentralized execution" operation mode and improves the learning efficiency of the model. Thereafter, the multi-agent system adopted a combination of offline and online learning, endowing the model with the ability to automatically optimize and update its strategy. In the offline learning phase, a value-based multi-agent conservative Q-learning (MACQL) algorithm was proposed to stabilize the learning results and reduce the risk of the subsequent online learning phase. In the online learning phase, a policy-gradient-based multi-agent deep deterministic policy gradient (MADDPG) algorithm was proposed to explore the action space and update the experience pool. Finally, the effectiveness of the proposed method was verified through a simulation analysis of a real-world 445-node system.
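The offline MACQL phase described in the abstract augments an ordinary temporal-difference loss with a conservative penalty that pushes down Q-values on actions absent from the logged data. As a rough illustration only (not the paper's implementation — the discrete action set, the `alpha` weight, and the array shapes are illustrative assumptions), the per-batch penalty for a single agent can be sketched as:

```python
import numpy as np

def cql_penalty(q_values, data_actions, alpha=1.0):
    """Sketch of the conservative Q-learning regularizer (offline phase).

    q_values:     (batch, n_actions) Q estimates for every discrete action
    data_actions: (batch,) indices of the actions actually taken in the data
    Returns alpha * mean( logsumexp_a Q(s, a) - Q(s, a_data) ), which is
    non-negative and shrinks as the policy stays close to the dataset.
    """
    # Numerically stable log-sum-exp over the action axis
    q_max = q_values.max(axis=1, keepdims=True)
    logsumexp = np.log(np.exp(q_values - q_max).sum(axis=1)) + q_max[:, 0]
    # Q-value of the action recorded in the offline experience pool
    q_data = q_values[np.arange(len(data_actions)), data_actions]
    return alpha * float(np.mean(logsumexp - q_data))
```

Because `logsumexp` upper-bounds the data action's Q-value, the penalty is always non-negative; adding it to the TD loss is what keeps the offline-learned policy conservative before the online MADDPG phase takes over exploration.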
Pages: 1-14
Page count: 14
Related Papers
50 records in total
  • [2] A Cloud-Edge Collaboration Solution for Distribution Network Reconfiguration Using Multi-Agent Deep Reinforcement Learning
    Gao, Hongjun
    Wang, Renjun
    He, Shuaijia
    Wang, Lingfeng
    Liu, Junyong
    Chen, Zhe
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2024, 39 (02) : 3867 - 3879
  • [3] Time-Segmented Multi-Level Reconfiguration in Distribution Network: A Novel Cloud-Edge Collaboration Framework
    Gao, Hongjun
    Ma, Wang
    He, Shuaijia
    Wang, Lingfeng
    Liu, Junyong
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (04) : 3319 - 3322
  • [4] Multi-level dynamic reconfiguration and operation optimization method for an urban distribution network based on deep reinforcement learning
    Wang, Zihan
    Gao, Hongjun
    Gao, Yiwen
    Qing, Zhuyu
    Hu, Mingyang
    Liu, Junyong
    [J]. Dianli Xitong Baohu yu Kongzhi/Power System Protection and Control, 2022, 50 (24): : 60 - 70
  • [5] Multi-level Dynamic Reconfiguration Method for Urban Distribution Networks Based on Deep Learning Algorithm
    Jiang, Siyuan
    Gao, Hongjun
    Ma, Wang
    Wang, Renjun
    Shi, Cheng
    Liu, Junyong
    [J]. Gaodianya Jishu/High Voltage Engineering, 2024, 50 (04): : 1468 - 1477
  • [6] Cloud-Edge Collaboration-Based Distribution Network Reconfiguration for Voltage Preventive Control
    Yue, Dong
    He, Ziwei
    Dou, Chunxia
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (12) : 11542 - 11552
  • [7] Cloud-Edge Training Architecture for Sim-to-Real Deep Reinforcement Learning
    Cao, Hongpeng
    Theile, Mirco
    Wyrwal, Federico G.
    Caccamo, Marco
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 9363 - 9370
  • [8] Cloud-edge collaboration task scheduling in cloud manufacturing: An attention-based deep reinforcement learning approach
    Chen, Zhen
    Zhang, Lin
    Wang, Xiaohan
    Wang, Kunyu
    [J]. COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 177
  • [9] A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration
    Zhang, Xinyu
    Hu, Zhigang
    Liang, Yang
    Xiao, Hui
    Xu, Aikun
    Zheng, Meiguang
    Sun, Chuan
    [J]. JOURNAL OF GRID COMPUTING, 2024, 22 (01)