Economic Operation and Management of Microgrid System Using Deep Reinforcement Learning

Cited: 4
Authors
Wu, Ling [1 ]
Zhang, Ji [1 ,2 ,3 ]
Affiliations
[1] Coll Business, Luoyang Polytech, Luoyang 471000, Peoples R China
[2] HeNan Univ Sci & Technol, Sch Econ, Luoyang 471000, Peoples R China
[3] HeNan Univ Sci & Technol, Luoyang Financial Expert Comm, Luoyang Econ & Social Res Ctr, Sch Econ, Luoyang, Peoples R China
Keywords
Boost Converter; DC Microgrids; Constant Power Load; Adaptive Nonlinear controller; Deep Reinforcement Learning; SLIDING-MODE CONTROL; BOOST CONVERTER; FEEDBACK;
DOI
10.1016/j.compeleceng.2022.107879
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology]
Subject classification code
0812
Abstract
This paper presents an adaptive backstepping sliding mode control (BSMC) management method based on deep reinforcement learning (DRL) to manage and stabilize a DC/DC boost converter feeding constant power loads (CPLs) in a microgrid. To design the BSMC, the zero-dynamics stability of the system under different output functions is analyzed by applying exact input/output feedback linearization. The resulting design is expressed in Brunovsky canonical form, which addresses both the nonlinearity introduced by the CPLs and the non-minimum-phase behavior of the converter. In the proposed controller, the switching gains are treated as adjustable controller coefficients that are selected adaptively by the DRL method through online learning. Tuning these gains simultaneously and adaptively ensures robust stability of the power-electronic system. Finally, the results show that the proposed control method achieves stronger robustness and better dynamic regulation than conventional nonlinear control strategies.
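The abstract compresses several modeling steps. The sketch below is only a rough illustration of how such a design is commonly set up: a standard averaged boost-with-CPL model, an energy-type output with stable zero dynamics, and a sliding surface whose switching gain is left adjustable. It is not the paper's exact derivation; the output function, surface coefficient c, gain k_sw, and the DRL reward are assumptions introduced here for illustration.

```latex
% Illustrative sketch only -- textbook averaged model and a generic
% sliding-mode style law; the paper's exact output function, surface,
% and DRL reward are not reproduced here.
% x1 = inductor current, x2 = DC-bus voltage, d = duty cycle,
% V_in = source voltage, P = constant power load.
\begin{align}
  L\dot{x}_1 &= V_{\mathrm{in}} - (1-d)\,x_2 , &
  C\dot{x}_2 &= (1-d)\,x_1 - \frac{P}{x_2} .
\end{align}
% Taking the bus voltage directly as output gives non-minimum-phase zero
% dynamics; the total stored energy is a common alternative output:
\begin{align}
  y = \tfrac{1}{2}\bigl(Lx_1^{2} + Cx_2^{2}\bigr), \qquad
  \dot{y} = V_{\mathrm{in}}x_1 - P, \qquad
  \ddot{y} = \frac{V_{\mathrm{in}}}{L}\bigl(V_{\mathrm{in}} - (1-d)x_2\bigr) \equiv u .
\end{align}
% With z1 = y and z2 = \dot{y}, the system is in Brunovsky canonical form
% (\dot{z}_1 = z_2, \dot{z}_2 = u). For the tracking error e = z_1 - z_1^{*},
% a sliding-mode style law with an adjustable switching gain k_sw is
\begin{align}
  s = \dot{e} + c\,e , \qquad
  u = \ddot{z}_1^{*} - c\,\dot{e} - k_{\mathrm{sw}}\operatorname{sgn}(s) , \qquad
  d = 1 - \frac{1}{x_2}\Bigl(V_{\mathrm{in}} - \frac{L\,u}{V_{\mathrm{in}}}\Bigr),
\end{align}
% which yields \dot{s} = -k_sw * sgn(s). In the paper's scheme, adjustable
% coefficients such as k_sw are what the DRL agent selects adaptively
% through online learning.
```

Using the stored energy as the output sidesteps the unstable zero dynamics that a direct voltage output would produce, which appears to be the point behind the abstract's reference to analyzing zero-dynamic stability with diverse output functions; the switching gain k_sw above stands in for the adjustable coefficients that the DRL agent tunes online.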
Pages: 14
Related papers
50 records in total
  • [1] Deep reinforcement learning for real-time economic energy management of microgrid system considering uncertainties
    Liu, Ding
    Zang, Chuanzhi
    Zeng, Peng
    Li, Wanting
    Wang, Xin
    Liu, Yuqi
    Xu, Shuqing
    FRONTIERS IN ENERGY RESEARCH, 2023, 11
  • [2] Energy Management System by Deep Reinforcement Learning Approach in a Building Microgrid
    Dini, Mohsen
    Ossart, Florence
    ELECTRIMACS 2022, VOL 2, 2024, 1164 : 257 - 269
  • [3] Optimal operation strategy of microgrid based on deep reinforcement learning
    Zhao P.
    Wu J.
    Wang Y.
    Zhang H.
    Dianli Zidonghua Shebei/Electric Power Automation Equipment, 2022, 42 (11): 9 - 16
  • [4] Energy Optimization Management of Multi-microgrid using Deep Reinforcement Learning
    Zhang, Tingjun
    Yue, Dong
    Zhao, Nan
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 4049 - 4053
  • [5] Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning
    Ji, Ying
    Wang, Jianhui
    Xu, Jiacan
    Fang, Xiaoke
    Zhang, Huaguang
    ENERGIES, 2019, 12 (12)
  • [6] Microgrid energy management using deep Q-network reinforcement learning
    Alabdullah, Mohammed H.
    Abido, Mohammad A.
    ALEXANDRIA ENGINEERING JOURNAL, 2022, 61 (11) : 9069 - 9078
  • [7] Research on System Economic Operation and Management Based on Deep Learning
    Wangtao
    Zheng, Zhenzhu
    Wang, Peiyuan
    Liu, Xiaobin
    SCIENTIFIC PROGRAMMING, 2022, 2022
  • [8] Optimising a Microgrid System by Deep Reinforcement Learning Techniques
    Dominguez-Barbero, David
    Garcia-Gonzalez, Javier
    Sanz-Bobi, Miguel A.
    Sanchez-Ubeda, Eugenio F.
    ENERGIES, 2020, 13 (11)
  • [9] Optimal Operation of a Microgrid with Hydrogen Storage Based on Deep Reinforcement Learning
    Zhu, Zhenshan
    Weng, Zhimin
    Zheng, Hailin
    ELECTRONICS, 2022, 11 (02)
  • [10] Deep reinforcement learning for energy management in a microgrid with flexible demand
    Nakabi, Taha Abdelhalim
    Toivanen, Pekka
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2021, 25