Accelerated DRL Agent for Autonomous Voltage Control Using Asynchronous Advantage Actor-critic

Cited: 12
Authors
Xu, Zhengyuan [1 ,4 ]
Zan, Yan [1 ]
Xu, Chunlei [2 ]
Li, Jin [3 ]
Shi, Di [1 ]
Wang, Zhiwei [1 ]
Zhang, Bei [1 ]
Duan, Jiajun [1 ]
Affiliations
[1] GEIRI North Amer, San Jose, CA 95134 USA
[2] State Grid Jiangsu Elect Power Co, Nanjing, Peoples R China
[3] NARI Grp Corp, Nanjing, Peoples R China
[4] Univ Penn, Philadelphia, PA 19104 USA
Keywords
Artificial Intelligence; Autonomous Voltage Control; Parallel Deep Reinforcement Learning; A3C; On-policy Learning;
DOI
10.1109/pesgm41954.2020.9281768
CLC Classification Code
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Code
0807; 0820;
Abstract
This paper presents a novel data-driven parallel framework for autonomous voltage control (AVC) of the power grid. The proposed framework employs a distributed Deep Reinforcement Learning algorithm, Asynchronous Advantage Actor-Critic (A3C), to regulate voltage profiles in a power grid. An accelerated, well-trained agent is obtained by running multiple workers simultaneously, each interacting repeatedly with a power grid simulator, so that multiple threads execute in parallel. The resulting agent, whose parameters are acquired through the joint training of all workers, is tested on a realistic Illinois 200-bus system with consideration of N-1 contingencies. The training and testing results demonstrate the significant speedup and excellent numerical stability of the proposed framework.
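The asynchronous multi-worker pattern the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of the A3C worker loop, not the paper's implementation: the "network" is a toy parameter vector, the rollout is replaced by a quadratic objective standing in for the power grid simulator, and all names (`GlobalParams`, `worker`, `train`) are invented for this sketch.

```python
import threading
import numpy as np

class GlobalParams:
    """Shared global parameters, updated asynchronously by all workers."""
    def __init__(self, dim):
        self.theta = np.zeros(dim)    # toy stand-in for actor/critic weights
        self.lock = threading.Lock()  # serializes the asynchronous updates

    def apply_gradient(self, grad, lr=0.1):
        with self.lock:
            self.theta -= lr * grad

def worker(global_params, episodes, rng):
    """One A3C worker: copy global params, roll out, push gradients back."""
    for _ in range(episodes):
        local_theta = global_params.theta.copy()  # sync with global network
        # Toy "rollout": minimize (theta - target)^2 under small noise,
        # standing in for policy-gradient estimates from the grid simulator.
        target = np.ones_like(local_theta)
        grad = 2.0 * (local_theta - target) \
               + 0.01 * rng.standard_normal(local_theta.shape)
        global_params.apply_gradient(grad)

def train(num_workers=4, episodes=200, dim=3, seed=0):
    gp = GlobalParams(dim)
    threads = [
        threading.Thread(target=worker,
                         args=(gp, episodes, np.random.default_rng(seed + i)))
        for i in range(num_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return gp.theta

if __name__ == "__main__":
    print(train())  # converges near the target vector of ones
```

The key design point mirrored here is that workers never wait for each other: each pulls the latest global parameters, computes its own gradient, and applies it under a brief lock, which is what gives A3C its near-linear speedup with worker count.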
Pages: 5