Optimal Dispatch of Integrated Electricity-Gas System With Soft Actor-Critic Deep Reinforcement Learning

Cited by: 0
Authors
Qiao, Ji [1 ]
Wang, Xinying [1 ]
Zhang, Qing [2 ]
Zhang, Dongxia [1 ]
Pu, Tianjiao [1 ]
Affiliations
[1] China Electric Power Research Institute, Haidian District, Beijing 100192, China
[2] School of Electrical and Electronics Engineering, North China Electric Power University, Changping District, Beijing 102206, China
Keywords
Gases; Stochastic systems; Wind power; Electric load dispatching; Deep learning; Learning systems
DOI: not available
Abstract
Optimal dispatch of multi-energy flows is one of the core technologies for the efficient operation of integrated energy systems. In this paper, a reinforcement learning method based on the soft actor-critic framework is proposed for optimizing the operation of an integrated electricity-gas energy system. The agent adaptively learns control strategies through interaction with the power system. The method takes continuous control actions in the multi-energy flow system and flexibly handles the complicated stochastic problem posed by uncertain wind power, photovoltaic power and loads, thereby enabling stochastic dispatch of the integrated electricity-gas energy system. First, the reinforcement learning framework for optimal dispatch is built and the soft actor-critic methodology is introduced. Then the interactive environment for the agent is constructed: the action and state spaces, reward scheme, neural network structure and training process are designed. Finally, results obtained with the proposed method are analyzed on two different integrated electricity-gas energy systems. © 2021 Chin. Soc. for Elec. Eng.
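The abstract frames dispatch as an agent interacting with a stochastic environment through continuous actions. The sketch below illustrates that formulation only: a toy environment whose state is (wind, PV, load), whose action is a pair of continuous setpoints, and whose reward is negative operating cost plus a penalty for unserved load. All class names, bounds, and cost coefficients are illustrative assumptions, not values or structures from the paper, and the placeholder policy merely stands in for a trained soft actor-critic actor.

```python
import random

class ElectricityGasDispatchEnv:
    """Hypothetical toy electricity-gas dispatch environment.

    State : (wind forecast, PV forecast, electric load) in MW.
    Action: continuous setpoints (gas-turbine output MW, gas-well supply).
    Reward: negative operating cost minus a penalty for unserved load.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # Stochastic renewables and load: the source of uncertainty the
        # soft actor-critic agent must cope with.
        self.state = (self.rng.uniform(0, 50),    # wind, MW
                      self.rng.uniform(0, 30),    # PV, MW
                      self.rng.uniform(60, 120))  # load, MW
        return self.state

    def step(self, action):
        gas_turbine_mw, gas_supply = action
        wind, pv, load = self.state
        unserved = max(0.0, load - wind - pv - gas_turbine_mw)
        cost = 30.0 * gas_turbine_mw + 5.0 * gas_supply   # fuel + gas cost
        reward = -(cost + 1000.0 * unserved)              # shortfall penalty
        self.state = self.reset()                         # next stochastic hour
        return self.state, reward

env = ElectricityGasDispatchEnv()
wind, pv, load = env.reset()
# Placeholder policy in lieu of the trained actor: cover the residual load.
action = (max(0.0, load - wind - pv), 10.0)
_, reward = env.step(action)
print(reward <= 0.0)  # → True (operating cost makes the reward non-positive)
```

Because the action components are real-valued rather than drawn from a discrete set, this is the kind of continuous control problem for which the soft actor-critic framework, with its stochastic Gaussian policy, is well suited.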
Pages: 819-832
Related Papers (50 total)
  • [21] Optimal Scheduling of Regional Integrated Energy System Based on Advantage Learning Soft Actor-critic Algorithm and Transfer Learning
    Luo, Wenjian
    Zhang, Jing
    He, Yu
    Gu, Tingyun
    Nie, Xianglun
    Fan, Luqin
    Yuan, Xufeng
    Li, Bowen
    [J]. Dianwang Jishu/Power System Technology, 2023, 47 (04): : 1601 - 1611
  • [22] Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
    Zhu, Hanlin
    Rashidinejad, Paria
    Jiao, Jiantao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [23] Deep Reinforcement Learning in VizDoom via DQN and Actor-Critic Agents
    Bakhanova, Maria
    Makarov, Ilya
    [J]. ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2021, PT I, 2021, 12861 : 138 - 150
  • [24] Lexicographic Actor-Critic Deep Reinforcement Learning for Urban Autonomous Driving
    Zhang, Hengrui
    Lin, Youfang
    Han, Sheng
    Lv, Kai
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (04) : 4308 - 4319
  • [25] Actor-Critic based Improper Reinforcement Learning
    Zaki, Mohammadi
    Mohan, Avinash
    Gopalan, Aditya
    Mannor, Shie
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [26] Curious Hierarchical Actor-Critic Reinforcement Learning
    Roeder, Frank
    Eppe, Manfred
    Nguyen, Phuong D. H.
    Wermter, Stefan
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 408 - 419
  • [27] A World Model for Actor-Critic in Reinforcement Learning
    Panov, A. I.
    Ugadiarov, L. A.
    [J]. PATTERN RECOGNITION AND IMAGE ANALYSIS, 2023, 33 (03) : 467 - 477
  • [28] Optimal Policy of Multiplayer Poker via Actor-Critic Reinforcement Learning
    Shi, Daming
    Guo, Xudong
    Liu, Yi
    Fan, Wenhui
    [J]. ENTROPY, 2022, 24 (06)
  • [29] A Deep Actor-Critic Reinforcement Learning Framework for Dynamic Multichannel Access
    Zhong, Chen
    Lu, Ziyang
    Gursoy, M. Cenk
    Velipasalar, Senem
    [J]. IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2019, 5 (04) : 1125 - 1139
  • [30] Fully distributed actor-critic architecture for multitask deep reinforcement learning
    Valcarcel Macua, Sergio
    Davies, Ian
    Tukiainen, Aleksi
    De Cote, Enrique Munoz
    [J]. KNOWLEDGE ENGINEERING REVIEW, 2021, 36