Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost

Cited by: 7
Authors
Han, Gwangwoo [1 ]
Joo, Hong-Jin [1 ]
Lim, Hee-Won [2 ]
An, Young-Sub [1 ]
Lee, Wang-Je [1 ]
Lee, Kyoung-Ho [1 ]
Affiliations
[1] Korea Inst Energy Res, Renewable Heat Integrat Lab, 152 Gajeong ro, Daejeon 34141, South Korea
[2] Daejeon Univ, Dept Architectural Engn, 62 Daehak ro, Daejeon 34520, South Korea
Keywords
Deep reinforcement learning; Heat pump; Electricity cost; Rainbow deep Q network; Load demand; Renewable energy; MODEL-PREDICTIVE CONTROL; RULE-BASED CONTROL; DEMAND RESPONSE; ENERGY FLEXIBILITY; SYSTEMS; MANAGEMENT; PERFORMANCE; GENERATION; BUILDINGS;
DOI
10.1016/j.energy.2023.126913
Chinese Library Classification
O414.1 [Thermodynamics]
Abstract
The need to reduce carbon emissions and achieve "Net Zero" energy has made improving heat pumps' (HPs) operational efficiency a crucial goal. However, current rule- or model-based control strategies cannot consider the entire heat production-storage-utilization cycle and struggle to achieve both high performance and generality. Here, we propose a model-free deep reinforcement learning (DRL)-based HP operation strategy that uses the Rainbow deep Q network algorithm to minimize electricity costs by considering thermal load demand, renewable generation, the coefficient of performance (COP) of HPs, and the state of charge (SOC) of thermal storage. We employ artificial neural networks to regress future load demands and COP, creating a data-driven environment that connects with the DRL agent. The Rainbow agent learns a creative strategy of limiting the maximum number of HP operations by raising the SOC in advance to match future load demands. The agent's performance is evaluated against rule-based control under future states, future uncertainty, and five-year long-term deployment. The proposed method reduces the year-round demand charge by 23.1% and the energy charge by 21.7%, resulting in a 22.2% reduction in electricity cost.
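The scheduling idea in the abstract — charge thermal storage ahead of peak-price, high-demand hours so the HP runs less often and more cheaply — can be illustrated with a toy reinforcement-learning loop. The sketch below uses tabular Q-learning as a simple stand-in for the paper's Rainbow DQN; the SOC discretization, tariff, load profile, and reward shaping are all illustrative assumptions, not values from the paper.

```python
import random

# Toy stand-in for the paper's Rainbow-DQN agent: tabular Q-learning over a
# discretized thermal-storage state of charge (SOC). All dynamics, prices,
# and penalties below are illustrative assumptions, not from the paper.

N_SOC = 11            # SOC discretized into 11 levels: 0 (empty) .. 10 (full)
ACTIONS = (0, 1)      # 0 = heat pump off, 1 = heat pump on
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = [[0.0, 0.0] for _ in range(N_SOC)]

def step(soc, action, load, price, cop=3.0):
    """Hypothetical environment: running the HP charges storage, load drains it."""
    if action:
        soc = min(N_SOC - 1, soc + 2)         # HP adds heat to storage
    soc = max(0, soc - load)                  # thermal demand discharges storage
    cost = price / cop if action else 0.0     # electricity cost of running the HP
    penalty = 5.0 if soc == 0 else 0.0        # penalty for risking unmet demand
    return soc, -(cost + penalty)             # reward = negative total cost

random.seed(0)
for episode in range(2000):
    soc = N_SOC // 2
    for hour in range(24):
        load = 1 if 6 <= hour < 22 else 0       # daytime thermal demand
        price = 2.0 if 9 <= hour < 18 else 1.0  # peak vs. off-peak tariff
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda x: Q[soc][x]))
        nxt, r = step(soc, a, load, price)
        Q[soc][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[soc][a])
        soc = nxt

# Greedy policy per SOC level: 1 = run HP, 0 = idle.
policy = [max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_SOC)]
print(policy)
```

In this toy setting the learned policy tends to charge at low SOC and idle near full storage, mirroring the abstract's strategy of raising SOC in advance to cover future load. The full paper replaces the Q-table with a Rainbow DQN and the hand-coded environment with ANN regressions of load and COP.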
Pages: 19
Related papers (50 records)
  • [31] A Data-Driven Multi-Agent Autonomous Voltage Control Framework Using Deep Reinforcement Learning
    Wang, Shengyi
    Duan, Jiajun
    Shi, Di
    Xu, Chunlei
    Li, Haifeng
    Diao, Ruisheng
    Wang, Zhiwei
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2020, 35 (06) : 4644 - 4654
  • [32] Data-driven sensitivity analysis and electricity consumption prediction for water source heat pump system using limited information
    Sun, Shaobo
    Chen, Huanxin
    BUILDING SIMULATION, 2021, 14 (04) : 1005 - 1016
  • [34] Data-Driven Passivity Analysis and Fault Detection Using Reinforcement Learning
    Ma, Haoran
    Zhao, Zhengen
    Li, Zhuyuan
    Yang, Ying
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2024,
  • [35] Cost reduction of heat pump assisted membrane distillation by using variable electricity prices
    Bindels, Martijn
    Nelemans, Bart
    DESALINATION, 2022, 530
  • [36] Data-driven optimal scheduling for integrated electricity-heat-gas-hydrogen energy system considering demand-side management: A deep reinforcement learning approach
    Liu, Jiejie
    Meng, Xianyang
    Wu, Jiangtao
    INTERNATIONAL JOURNAL OF HYDROGEN ENERGY, 2025, 103 : 147 - 165
  • [37] Effective data-driven precision medicine by cluster-applied deep reinforcement learning
    Oh, Sang Ho
    Lee, Su Jin
    Park, Jongyoul
    KNOWLEDGE-BASED SYSTEMS, 2022, 256
  • [38] Data-driven dynamic resource scheduling for network slicing: A Deep reinforcement learning approach
    Wang, Haozhe
    Wu, Yulei
    Min, Geyong
    Xu, Jie
    Tang, Pengcheng
    INFORMATION SCIENCES, 2019, 498 : 106 - 116
  • [39] DeepNap: Data-Driven Base Station Sleeping Operations Through Deep Reinforcement Learning
    Liu, Jingchu
    Krishnamachari, Bhaskar
    Zhou, Sheng
    Niu, Zhisheng
    IEEE INTERNET OF THINGS JOURNAL, 2018, 5 (06): : 4273 - 4282
  • [40] DATA-DRIVEN MODEL-FREE ITERATIVE LEARNING CONTROL USING REINFORCEMENT LEARNING
    Song, Bing
    Phan, Minh Q.
    Longman, Richard W.
    ASTRODYNAMICS 2018, PTS I-IV, 2019, 167 : 2579 - 2597