Application of deep Q-networks for model-free optimal control balancing between different HVAC systems

Cited by: 58
Authors
Ahn, Ki Uhn [1 ]
Park, Cheol Soo [2 ]
Affiliations
[1] Seoul Natl Univ, Inst Engn Res, Seoul, South Korea
[2] Seoul Natl Univ, Coll Engn, Inst Engn Res, Dept Architecture & Architectural Engn,Inst Const, 1 Gwanak Ro, Seoul 08826, South Korea
Keywords
BUILDING ENERGY; PART; OPTIMIZATION; STRATEGIES; VENTILATION; PREDICTION;
DOI
10.1080/23744731.2019.1680234
CLC number
O414.1 [Thermodynamics];
Abstract
A deep Q-network (DQN) was applied for model-free optimal control balancing between different HVAC systems. The DQN was coupled to a reference office building: an EnergyPlus simulation model provided by the U.S. Department of Energy. The building was air-conditioned with four air-handling units (AHUs), two electric chillers, a cooling tower, and two pumps. EnergyPlus simulation results for eleven days (July 1-11) and three subsequent days (July 12-14) were used to improve the DQN policy and to test the optimal control, respectively. The optimization goal was to minimize the building's energy use while maintaining the indoor CO2 concentration below 1,000 ppm. It was revealed that the DQN, a reinforcement learning method, can improve its control policy based on prior actions, states, and rewards. The DQN lowered total energy use by 15.7% compared with the baseline operation while keeping the indoor CO2 concentration below 1,000 ppm. Unlike model predictive control, the DQN requires neither a simulation model nor a predetermined prediction horizon, thus delivering model-free optimal control. Furthermore, it was demonstrated that the DQN can find balanced control actions among different energy consumers in the building, such as chillers, pumps, and AHUs.
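The abstract describes a DQN agent that improves its policy from prior states, actions, and rewards without a simulation model. A minimal, self-contained sketch of that learning loop is shown below. This is not the authors' implementation: the state features, action set, network size, and hyperparameters (`TinyDQN`, `hidden`, the CO2/energy reward idea in the comments) are illustrative assumptions, and the network is a tiny NumPy two-layer Q-function rather than a deep model.

```python
import random
from collections import deque
import numpy as np

class TinyDQN:
    """Illustrative DQN-style agent: epsilon-greedy actions, a replay
    buffer, and TD(0) updates to a small two-layer Q-network.
    State could be, e.g., scaled [CO2 ppm, outdoor temp, hour-of-day];
    actions could index discrete AHU/chiller setpoint combinations.
    All dimensions here are assumptions for the sketch."""

    def __init__(self, n_state, n_action, hidden=16, lr=0.01,
                 gamma=0.95, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_state, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_action))
        self.b2 = np.zeros(n_action)
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_action = n_action
        self.replay = deque(maxlen=1000)  # stores (s, a, r, s_next)

    def q(self, s):
        # Forward pass: Q-values for every discrete action.
        h = np.tanh(s @ self.w1 + self.b1)
        return h @ self.w2 + self.b2, h

    def act(self, s):
        # Epsilon-greedy: explore occasionally, else pick argmax-Q action.
        if random.random() < self.eps:
            return random.randrange(self.n_action)
        return int(np.argmax(self.q(s)[0]))

    def train_step(self, batch_size=8):
        # Sample past transitions and nudge Q(s, a) toward r + gamma*max Q(s').
        if len(self.replay) < batch_size:
            return
        for s, a, r, s2 in random.sample(self.replay, batch_size):
            q_s, h = self.q(s)
            target = r + self.gamma * np.max(self.q(s2)[0])
            err = q_s[a] - target          # TD error
            self.w2[:, a] -= self.lr * err * h
            self.b2[a] -= self.lr * err
            dh = err * self.w2[:, a] * (1.0 - h ** 2)  # backprop through tanh
            self.w1 -= self.lr * np.outer(s, dh)
            self.b1 -= self.lr * dh
```

In a setting like the paper's, the reward would combine negative energy use with a penalty whenever indoor CO2 exceeds 1,000 ppm, so the agent learns to balance chiller, pump, and AHU actions against the comfort constraint.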
Pages: 61-74
Page count: 14
Related papers
50 records in total
  • [31] Model-Free Optimal Control of VAR Resources in Distribution Systems: An Extremum Seeking Approach
    Arnold, Daniel B.
    Negrete-Pincetic, Matias
    Sankur, Michael D.
    Auslander, David M.
    Callaway, Duncan S.
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2016, 31 (05) : 3583 - 3593
  • [32] Model-Free Optimal Control of Linear Multiagent Systems via Decomposition and Hierarchical Approximation
    Jing, Gangshan
    Bai, He
    George, Jemin
    Chakrabortty, Aranya
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2021, 8 (03): : 1069 - 1081
  • [33] Data-Driven Optimal Model-Free Control of Twin Rotor Aerodynamic Systems
    Roman, Raul-Cristian
    Radac, Mircea-Bogdan
    Precup, Radu-Emil
    Petriu, Emil M.
    2015 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2015, : 161 - 166
  • [34] Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems
    Xu, Zhenhui
    Shen, Tielong
    Cheng, Daizhan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1520 - 1534
  • [35] Model-free optimal control of discrete-time systems with additive and multiplicative noises
    Lai, Jing
    Xiong, Junlin
    Shu, Zhan
    AUTOMATICA, 2023, 147
  • [36] Novel Model-free Optimal Active Vibration Control Strategy Based on Deep Reinforcement Learning
    Zhang, Yi-Ang
    Zhu, Songye
    STRUCTURAL CONTROL & HEALTH MONITORING, 2023, 2023
  • [37] Control of neural systems at multiple scales using model-free, deep reinforcement learning
    Mitchell, B. A.
    Petzold, L. R.
    SCIENTIFIC REPORTS, 2018, 8
  • [39] A model-free deep integral policy iteration structure for robust control of uncertain systems
    Wang, Ding
    Liu, Ao
    Qiao, Junfei
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2024, 55 (08) : 1571 - 1583
  • [40] Different-factor compact-form model-free adaptive control with neural networks for MIMO nonlinear systems
    Chen, Chen
    Lu, Jiangang
    ASIAN JOURNAL OF CONTROL, 2022, 24 (04) : 1688 - 1699