Deep reinforcement learning-based joint load scheduling for household multi-energy system

Cited by: 17
Authors
Zhao, Liyuan [1 ,2 ]
Yang, Ting [2 ]
Li, Wei [3 ]
Zomaya, Albert Y. [3 ]
Affiliations
[1] Hebei Univ Technol, State Key Lab Reliabil & Intelligence Elect Equipm, Tianjin 300401, Peoples R China
[2] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Camperdown, NSW 2006, Australia
Funding
National Natural Science Foundation of China;
Keywords
Household multi-energy system; Joint load scheduling; Deep reinforcement learning; Energy management; ENERGY MANAGEMENT-SYSTEM; DEMAND RESPONSE; SMART HOME; OPTIMIZATION; APPLIANCES; HEAT; STRATEGIES; BENEFITS;
DOI
10.1016/j.apenergy.2022.119346
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Codes
0807; 0820;
Abstract
With the growing adoption of renewable energy sources and gas-fired domestic devices in households, this paper proposes a joint load scheduling strategy for the household multi-energy system (HMES) that aims to minimize residents' energy cost while maintaining thermal comfort. Specifically, the studied HMES contains photovoltaics, a gas-electric hybrid heating system, a gas-electric kitchen stove, and various types of conventional loads. However, developing an efficient energy scheduling strategy is challenging due to the uncertainties in energy price, photovoltaic generation, outdoor temperature, and residents' hot water demand. To tackle this problem, we formulate the HMES scheduling problem as a Markov decision process with both continuous and discrete actions and propose a deep reinforcement learning-based HMES scheduling approach. A mixed distribution is used to approximate the scheduling strategies of different types of household devices, and proximal policy optimization (PPO) is used to optimize the scheduling strategies without requiring any prediction information or distribution knowledge of the system uncertainties. The proposed approach can simultaneously handle the continuous actions of power-shiftable devices and the discrete actions of time-shiftable devices, as well as the optimal management of electrical and gas-fired devices, so as to jointly optimize the operation of all household loads. The proposed approach is compared with a deep Q network (DQN)-based approach and a model predictive control (MPC)-based approach. Comparison results show that the average energy cost of the proposed approach is 12.17% lower than that of the DQN-based approach and 4.59% lower than that of the MPC-based approach.
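To make the mixed-action idea in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of a policy network that jointly parameterizes a Gaussian distribution over continuous actions (e.g., power setpoints of power-shiftable devices) and a categorical distribution over discrete actions (e.g., run/defer decisions for time-shiftable devices), whose joint log-probability could then be used in a PPO-style objective. The network sizes, device examples, and observation contents are assumptions for illustration only.

```python
# Illustrative sketch only: a mixed continuous/discrete policy head of the kind
# the abstract describes. All dimensions and device names are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Normal, Categorical


class MixedActionPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_continuous: int, n_discrete: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu_head = nn.Linear(hidden, n_continuous)           # means of Gaussian actions
        self.log_std = nn.Parameter(torch.zeros(n_continuous))   # state-independent log std
        self.logits_head = nn.Linear(hidden, n_discrete)         # logits of categorical actions

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)
        cont_dist = Normal(self.mu_head(h), self.log_std.exp())  # continuous (power-shiftable) part
        disc_dist = Categorical(logits=self.logits_head(h))      # discrete (time-shiftable) part
        return cont_dist, disc_dist

    def sample(self, obs: torch.Tensor):
        cont_dist, disc_dist = self.forward(obs)
        a_cont = cont_dist.sample()   # e.g., heating power setpoint (placeholder)
        a_disc = disc_dist.sample()   # e.g., start/defer a schedulable task (placeholder)
        # Joint log-probability of the mixed action, as would enter a PPO probability ratio.
        log_prob = cont_dist.log_prob(a_cont).sum(-1) + disc_dist.log_prob(a_disc)
        return a_cont, a_disc, log_prob


if __name__ == "__main__":
    # Toy usage: the observation might stack price, PV output, outdoor temperature,
    # indoor/water temperatures, and time of day (dimensions are placeholders).
    policy = MixedActionPolicy(obs_dim=8, n_continuous=2, n_discrete=3)
    obs = torch.randn(1, 8)
    a_cont, a_disc, logp = policy.sample(obs)
    print(a_cont.shape, a_disc.shape, logp.shape)
```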
Pages: 13