Deep reinforcement learning-based joint load scheduling for household multi-energy system

Cited by: 17
Authors
Zhao, Liyuan [1 ,2 ]
Yang, Ting [2 ]
Li, Wei [3 ]
Zomaya, Albert Y. [3 ]
Affiliations
[1] Hebei Univ Technol, State Key Lab Reliabil & Intelligence Elect Equipm, Tianjin 300401, Peoples R China
[2] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Camperdown, NSW 2006, Australia
Funding
National Natural Science Foundation of China;
Keywords
Household multi-energy system; Joint load scheduling; Deep reinforcement learning; Energy management; ENERGY MANAGEMENT-SYSTEM; DEMAND RESPONSE; SMART HOME; OPTIMIZATION; APPLIANCES; HEAT; STRATEGIES; BENEFITS;
DOI
10.1016/j.apenergy.2022.119346
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Discipline codes
0807; 0820;
Abstract
Against the backdrop of the growing adoption of renewable energy sources and gas-fired domestic devices in households, this paper proposes a joint load scheduling strategy for a household multi-energy system (HMES) that aims to minimize residents' energy cost while maintaining thermal comfort. Specifically, the studied HMES contains photovoltaic generation, a gas-electric hybrid heating system, a gas-electric kitchen stove, and various types of conventional loads. However, developing an efficient energy scheduling strategy is challenging due to the uncertainties in energy price, photovoltaic generation, outdoor temperature, and residents' hot water demand. To tackle this problem, we formulate the HMES scheduling problem as a Markov decision process with both continuous and discrete actions and propose a deep reinforcement learning-based HMES scheduling approach. A mixed distribution is used to approximate the scheduling strategies of different types of household devices, and proximal policy optimization is used to optimize the scheduling strategies without requiring any prediction information or distribution knowledge of the system uncertainties. The proposed approach can handle continuous actions of power-shiftable devices and discrete actions of time-shiftable devices simultaneously, as well as the coordinated management of electrical and gas-fired devices, so as to jointly optimize the operation of all household loads. The proposed approach is compared with a deep Q network (DQN)-based approach and a model predictive control (MPC)-based approach. Comparison results show that the average energy cost of the proposed approach is 12.17% lower than the DQN-based approach and 4.59% lower than the MPC-based approach.
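The mixed action distribution described in the abstract can be illustrated with a minimal sketch: a Gaussian head for continuous actions (e.g. power setpoints of power-shiftable devices) combined with a categorical head for discrete actions (e.g. start slots of time-shiftable devices), whose joint log-likelihood is what a PPO-style update would use. This is an illustrative reconstruction, not the authors' code; all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class MixedActionDistribution:
    """Joint distribution over continuous (power-shiftable) and
    discrete (time-shiftable) device actions for hybrid-action PPO."""

    def __init__(self, mean, log_std, logits):
        self.mean = np.asarray(mean, dtype=float)     # Gaussian means, one per continuous device
        self.std = np.exp(np.asarray(log_std, dtype=float))
        z = np.asarray(logits, dtype=float)
        z = z - z.max()                               # stable softmax over discrete options
        self.probs = np.exp(z) / np.exp(z).sum()

    def sample(self):
        cont = rng.normal(self.mean, self.std)        # e.g. heating power setpoints
        disc = int(rng.choice(len(self.probs), p=self.probs))  # e.g. appliance start slot
        return cont, disc

    def log_prob(self, cont, disc):
        # Joint log-likelihood = sum of per-device Gaussian terms + categorical term;
        # PPO's probability ratio is exp(log_prob_new - log_prob_old).
        g = -0.5 * (((cont - self.mean) / self.std) ** 2
                    + 2.0 * np.log(self.std) + np.log(2.0 * np.pi))
        return g.sum() + np.log(self.probs[disc])

# Hypothetical policy outputs: 2 continuous devices, 3 discrete start slots.
dist = MixedActionDistribution(mean=[0.5, 0.3], log_std=[-1.0, -1.0],
                               logits=[0.2, 1.5, 0.1])
action_c, action_d = dist.sample()
lp = dist.log_prob(action_c, action_d)
```

In a full implementation the means, log-stds, and logits would be outputs of the policy network, and the joint log-probability would feed PPO's clipped surrogate objective.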
Pages: 13
Related papers
50 records
  • [41] Reinforcement learning-based scheduling strategy for energy storage in microgrid
    Zhou, Kunshu
    Zhou, Kaile
    Yang, Shanlin
    [J]. JOURNAL OF ENERGY STORAGE, 2022, 51
  • [42] Deep Reinforcement Learning-Based Joint Sequence Scheduling and Trajectory Planning in Wireless Rechargeable Sensor Networks
    Jiang, Chengpeng
    Chen, Wencong
    Wang, Ziyang
    Xiao, Wendong
    [J]. IEEE SENSORS JOURNAL, 2024, 24 (08) : 13699 - 13711
  • [43] Joint Energy and Carbon Trading for Multi-Microgrid System Based on Multi-Agent Deep Reinforcement Learning
    Zhou Y.
    Ma Z.
    Wang T.
    Zhang J.
    Shi X.
    Zou S.
    [J]. IEEE Transactions on Power Systems, 2024, 39 (06) : 1 - 13
  • [44] Joint Energy Trading and Scheduling for Multi-Energy microgrids with Storage
    Zhu, Dafeng
    Yang, Bo
    Liu, Qi
    Ma, Kai
    Zhu, Shanying
    Guan, Xinping
    [J]. PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 1617 - 1622
  • [45] DRJLRA: A Deep Reinforcement Learning-Based Joint Load and Resource Allocation in Heterogeneous Coded Distributed Computing
    Heidarpour, Ali Reza
    Ardakani, Maryam Haghighi
    Ardakani, Masoud
    Tellambura, Chintha
    [J]. 2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023,
  • [46] Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing
    Sheng, Shuran
    Chen, Peng
    Chen, Zhimin
    Wu, Lenan
    Yao, Yuxuan
    [J]. SENSORS, 2021, 21 (05) : 1 - 19
  • [47] A novel deep reinforcement learning-based algorithm for multi-objective energy-efficient flow-shop scheduling
    Liang, Peng
    Xiao, Pengfei
    Li, Zeya
    Luo, Min
    Zhang, Chaoyong
    [J]. IET Collaborative Intelligent Manufacturing, 2024, 6 (04)
  • [48] Deep Reinforcement Learning-Based Task Scheduling in Heterogeneous MEC Networks
    Shang, Ying
    Li, Jinglei
    Qin, Meng
    Yang, Qinghai
    [J]. 2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [49] Deep Reinforcement Learning-Based Job Shop Scheduling of Smart Manufacturing
    Elsayed, Eman K.
    Elsayed, Asmaa K.
    Eldahshan, Kamal A.
    [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (03): 5103 - 5120
  • [50] Deep reinforcement learning-based optimization strategy for the cooperative scheduling of harvesters
    Li, Zikang
    Zhang, Fan
    Teng, Guifa
    Li, Zheng
    Wang, Ziyi
    Ma, Shiji
    [J]. Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering, 40 (14): 23 - 32