Application of Deep Reinforcement Learning in Demand Response

Citations: 0
Authors
Sun, Yi [1 ]
Liu, Di [1 ]
Li, Bin [1 ]
Xu, Yonghai [1 ]
Institutions
[1] School of Electrical and Electronic Engineering, North China Electric Power University, Beijing 102206, China
Funding
National Natural Science Foundation of China
Keywords
Neural networks; Deep learning
DOI
10.7500/AEPS20180110007
Abstract
With the advancement of electricity market reform in China, the demand response business is developing toward diversification and normalization. The new environment places ever higher requirements on the reliability and accuracy of demand response, creating an urgent need for mature technical support. Deep reinforcement learning can accurately perceive complex external environments and make optimal decisions, which matches the requirements of demand response well. On this basis, the application of deep reinforcement learning in demand response is discussed. First, the development history and research status of deep reinforcement learning are presented, and the research status and future development requirements of demand response are analyzed. Then, the feasibility of applying deep reinforcement learning to demand response services, and the methods for doing so, are discussed. Finally, a development framework for demand response services based on deep reinforcement learning is proposed, and the implementation process of deep reinforcement learning is analyzed in depth, providing a reference for the development of demand response technologies. © 2019 Automation of Electric Power Systems Press.
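To make the idea concrete, the decision problem the abstract describes can be sketched as a reinforcement-learning loop: the agent observes the external environment (here reduced to a price level), chooses a demand-response action, and learns from the resulting reward. The sketch below is an illustrative toy example, not the paper's method: the two-level price, the comfort value, and the bandit-style tabular Q-learning update are all assumptions chosen for brevity, standing in for the deep networks and richer state spaces the paper discusses.

```python
import random

# Toy demand-response decision: each hour the agent sees a price level
# (0 = low, 1 = high) and either runs a flexible load now (action 1)
# or defers it (action 0). Reward = comfort benefit minus energy cost.
PRICES = [0.2, 1.0]   # assumed cost per unit in the low/high price states
COMFORT = 0.6         # assumed benefit of running the load immediately

def reward(state, action):
    """Net benefit of the chosen action in the given price state."""
    return COMFORT - PRICES[state] if action == 1 else 0.0

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning with a one-step (bandit-style) update."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    for _ in range(episodes):
        s = rng.randint(0, 1)     # a random price state arrives
        # epsilon-greedy exploration over the two actions
        if rng.random() < eps:
            a = rng.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        q[s][a] += alpha * (reward(s, a) - q[s][a])
    return q

q = train()
# Greedy policy per price state: run the load when price is low, defer when high.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in (0, 1)]
print(policy)  # -> [1, 0]
```

In the settings the paper targets, the Q-table would be replaced by a deep network and the state would include load, weather, and market signals, but the observe-decide-learn structure is the same.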
Pages: 183-191