Smart Grid Optimization by Deep Reinforcement Learning over Discrete and Continuous Action Space

Cited: 0
Authors
Sogabe, Tomah [1 ,2 ,3 ]
Malla, Dinesh Bahadur [2 ]
Takayama, Shota [2 ]
Shin, Seiichi [1 ]
Sakamoto, Katsuyoshi [2 ]
Yamaguchi, Koichi [2 ]
Singh, Thakur Praveen [3 ]
Sogabe, Masaru [3 ]
Hirata, Tomohiro [4 ]
Okada, Yoshitaka [4 ]
Affiliations
[1] Univ Electrocommun, Info Powered Energy Syst Res Ctr, Chofu, Tokyo 1828585, Japan
[2] Univ Electrocommun, Dept Engn Sci, Chofu, Tokyo 1828585, Japan
[3] Grid Inc, Technol Solut Grp, Minato Ku, Tokyo 1070061, Japan
[4] Univ Tokyo, Res Ctr Adv Sci & Technol, Tokyo 1538904, Japan
Keywords
DOI
(none available)
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Code
08 ; 0830 ;
Abstract
Energy optimization in the smart grid has gradually shifted toward agent-based machine learning methods, represented by state-of-the-art deep learning and deep reinforcement learning. In particular, deep-neural-network-based reinforcement learning methods are emerging and gaining popularity for smart grid applications. In this work, we applied two deep reinforcement learning algorithms designed for discrete and continuous action spaces, respectively. These algorithms were embedded in a rigorous physical model built with Simscape Power Systems(TM) (MATLAB/Simulink(TM) environment) for smart grid optimization. The results show that the agent successfully captured the energy demand and supply features in the training data and learned to choose actions that maximize its reward.
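The abstract's distinction between discrete and continuous action spaces can be illustrated with a minimal sketch. This is not the paper's implementation: the battery-controller framing, the function names, and the DQN-style epsilon-greedy / DDPG-style bounded-actor choices are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def discrete_action(q_values, epsilon=0.1):
    """Epsilon-greedy selection over a discrete action space (DQN-style)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def continuous_action(actor_output, low, high, noise_std=0.05):
    """Map an unbounded actor output into [low, high] via tanh squashing,
    add Gaussian exploration noise (DDPG-style), and clip back into bounds."""
    a = low + (np.tanh(actor_output) + 1.0) * 0.5 * (high - low)
    a += rng.normal(0.0, noise_std * (high - low))
    return float(np.clip(a, low, high))

# Toy usage for a hypothetical battery controller: a discrete
# charge/idle/discharge decision, and a continuous power set-point in kW.
q = np.array([0.2, 1.3, -0.4])  # Q-values for {charge, idle, discharge}
print(discrete_action(q, epsilon=0.0))            # greedy choice
print(continuous_action(0.7, low=-5.0, high=5.0)) # bounded set-point
```

The two selectors highlight why separate algorithms are needed: the discrete case reduces to an argmax over per-action values, while the continuous case requires an actor that emits a real-valued action directly.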
Pages: 3794-3796
Page count: 3
Related Papers
50 records
  • [21] A reinforcement learning with switching controllers for a continuous action space
    Nagayoshi, Masato
    Murao, Hajime
    Tamaki, Hisashi
    ARTIFICIAL LIFE AND ROBOTICS, 2010, 15 (01) : 97 - 100
  • [22] Reinforcement learning algorithm with CTRNN in continuous action space
    Arie, Hiroaki
    Namikawa, Jun
    Ogata, Tetsuya
    Tani, Jun
    Sugano, Shigeki
    NEURAL INFORMATION PROCESSING, PT 1, PROCEEDINGS, 2006, 4232 : 387 - 396
  • [23] Hierarchical Deep Reinforcement Learning for Continuous Action Control
    Yang, Zhaoyang
    Merrick, Kathryn
    Jin, Lianwen
    Abbass, Hussein A.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (11) : 5174 - 5184
  • [24] Optimization of oxygen system scheduling in hybrid action space based on deep reinforcement learning
    Li, Lijuan
    Yang, Xue
    Yang, Shipin
    Xu, Xiaowei
    COMPUTERS & CHEMICAL ENGINEERING, 2023, 171
  • [25] Deep Reinforcement Learning with a Natural Language Action Space
    He, Ji
    Chen, Jianshu
    He, Xiaodong
    Gao, Jianfeng
    Li, Lihong
    Deng, Li
    Ostendorf, Mari
    PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2016, : 1621 - 1630
  • [26] Path Planning for Mobile Robot's Continuous Action Space Based on Deep Reinforcement Learning
    Yan, Tingxing
    Zhang, Yong
    Wang, Bin
    2018 INTERNATIONAL CONFERENCE ON BIG DATA AND ARTIFICIAL INTELLIGENCE (BDAI 2018), 2018, : 42 - 46
  • [27] Vision-based Navigation of UAV with Continuous Action Space Using Deep Reinforcement Learning
    Zhou, Benchun
    Wang, Weihong
    Liu, Zhenghua
    Wang, Jia
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 5030 - 5035
  • [28] Deep Reinforcement Learning for Smart Grid Operations: Algorithms, Applications, and Prospects
    Li, Yuanzheng
    Yu, Chaofan
    Shahidehpour, Mohammad
    Yang, Tao
    Zeng, Zhigang
    Chai, Tianyou
    PROCEEDINGS OF THE IEEE, 2023, 111 (09) : 1055 - 1096
  • [29] The distributed economic dispatch of smart grid based on deep reinforcement learning
    Fu, Yang
    Guo, Xiaoyan
    Mi, Yang
    Yuan, Minghan
    Ge, Xiaolin
    Su, Xiangjing
    Li, Zhenkun
    IET GENERATION TRANSMISSION & DISTRIBUTION, 2021, 15 (18) : 2645 - 2658
  • [30] RAN Slice Strategy Based on Deep Reinforcement Learning for Smart Grid
    Meng, Sachula
    Wang, Zhihui
    Ding, Huixia
    Wu, Sai
    Li, Xuan
    Zhao, Peng
    Zhu, Chunying
    Wang, Xue
    2019 COMPUTING, COMMUNICATIONS AND IOT APPLICATIONS (COMCOMAP), 2019, : 6 - 11