Model-free MIMO control tuning of a chiller process using reinforcement learning

Cited by: 3
Authors
Rosdahl, Christian [1 ]
Bernhardsson, B. O. [1 ]
Eisenhower, Bryan [2 ]
Affiliations
[1] Lund Univ, Dept Automat Control, Lund, Sweden
[2] Carrier World Headquarters, Palm Beach Gardens, FL USA
Keywords
BEHAVIOR
DOI
10.1080/23744731.2023.2247938
CLC Number
O414.1 [Thermodynamics]
Subject Classification Number
Abstract
The performance of HVAC equipment, including chillers, continues to be pushed toward theoretical limits, which increases the need for advanced control logic to operate it efficiently and robustly. At the same time, equipment architectures are becoming more complex; many systems have multiple compressors, expansion devices, evaporators, circuits, or other elements that challenge control design and the resulting performance. To maintain acceptable speed of response, stability, and robustness, controllers are also becoming more complex, moving from thermostatic control to proportional-integral (PI) control and on to multiple-input multiple-output (MIMO) controllers. Model-based control design works well for synthesizing such controllers, but maintaining accurate models for numerous product variants is unrealistic and often leads to very conservative designs. To address this, we propose and demonstrate a learning-based control tuner that supports the design of MIMO decoupling PI controllers, using online information to adapt the controller coefficients from an initial guess during commissioning or operation. The approach is tested on a physics-based model of a water-cooled screw chiller. The method finds a controller that performs better than a nominal controller (two single-loop PI controllers), reducing deviations from the operating point during disturbances while still following reference changes.
Pages: 782-794
Page count: 13
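To make the kind of tuner described in the abstract concrete, the sketch below implements a 2x2 decoupling PI controller whose gains and decoupling terms are adjusted by a simple model-free random search over closed-loop episode costs. The controller structure, the parameter names, the plant_step interface, and the toy plant are illustrative assumptions; the paper's actual reinforcement-learning algorithm and chiller model are not reproduced here.

```python
import numpy as np


class DecouplingPIController:
    """2x2 MIMO PI controller with a static decoupling matrix.

    The structure (per-loop PI action mixed through a 2x2 decoupler) and all
    parameter names are illustrative assumptions, not the paper's notation.
    """

    def __init__(self, kp, ki, decouple, dt):
        self.kp = np.asarray(kp, dtype=float)       # proportional gains, shape (2,)
        self.ki = np.asarray(ki, dtype=float)       # integral gains, shape (2,)
        self.D = np.asarray(decouple, dtype=float)  # 2x2 static decoupling matrix
        self.dt = dt                                # controller sample time
        self.integral = np.zeros(2)

    def __call__(self, error):
        """Return the control signal for the current tracking error."""
        self.integral += error * self.dt
        v = self.kp * error + self.ki * self.integral  # per-loop PI action
        return self.D @ v                              # mix through the decoupler


def episode_cost(params, plant_step, ref, n_steps=200, dt=0.1):
    """Closed-loop cost (sum of squared tracking errors) over one episode."""
    kp, ki, d12, d21 = params[:2], params[2:4], params[4], params[5]
    ctrl = DecouplingPIController(kp, ki, [[1.0, d12], [d21, 1.0]], dt)
    y, cost = np.zeros(2), 0.0
    for _ in range(n_steps):
        e = ref - y
        u = ctrl(e)
        y = plant_step(y, u)   # plant model or, online, the measured response
        cost += float(e @ e)
    return cost


def tune_gains(params, plant_step, ref, iters=200, sigma=0.05, seed=0):
    """Model-free tuning by random search: perturb the six controller
    parameters and keep a perturbation only if it lowers the closed-loop
    cost. A generic stand-in for the RL tuner described in the paper."""
    rng = np.random.default_rng(seed)
    params = np.asarray(params, dtype=float)
    best = episode_cost(params, plant_step, ref)
    for _ in range(iters):
        candidate = params + sigma * rng.normal(size=params.shape)
        cost = episode_cost(candidate, plant_step, ref)
        if cost < best:  # greedy accept
            params, best = candidate, cost
    return params, best


if __name__ == "__main__":
    # Toy coupled two-input/two-output linear plant used only to exercise
    # the tuner; it is not a chiller model.
    A = np.array([[-0.5, 0.2], [0.3, -0.4]])
    B = np.array([[0.8, 0.1], [0.2, 0.9]])

    def plant(y, u):
        # Forward-Euler step of dy/dt = A y + B u with step 0.1.
        return y + 0.1 * (A @ y + B @ u)

    ref = np.array([1.0, 0.5])
    init = np.array([0.5, 0.5, 0.1, 0.1, 0.0, 0.0])  # kp1, kp2, ki1, ki2, d12, d21
    tuned, cost = tune_gains(init, plant, ref)
    print("tuned parameters:", tuned, "closed-loop cost:", cost)
```

The greedy random search is only one of many model-free policy-search updates; a policy-gradient or actor-critic update could replace it without changing the controller interface.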
Related Papers
50 records in total
  • [21] On Distributed Model-Free Reinforcement Learning Control with Stability Guarantee
    Mukherjee, Sayak
    Thanh Long Vu
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 2175 - 2180
  • [22] Model-free Control for Stratospheric Airship Based on Reinforcement Learning
    Nie, Chunyu
    Zhu, Ming
    Zheng, Zewei
    Wu, Zhe
    PROCEEDINGS OF THE 35TH CHINESE CONTROL CONFERENCE 2016, 2016, : 10702 - 10707
  • [23] Depth Control of Model-Free AUVs via Reinforcement Learning
    Wu, Hui
    Song, Shiji
    You, Keyou
    Wu, Cheng
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49 (12): : 2499 - 2510
  • [24] An Hybrid Model-Free Reinforcement Learning Approach for HVAC Control
    Solinas, Francesco M.
    Bellagarda, Andrea
    Macii, Enrico
    Patti, Edoardo
    Bottaccioli, Lorenzo
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON ENVIRONMENT AND ELECTRICAL ENGINEERING AND 2021 5TH IEEE INDUSTRIAL AND COMMERCIAL POWER SYSTEMS EUROPE (EEEIC/I&CPS EUROPE), 2021,
  • [25] Model-Free Control for Distributed Stream Data Processing using Deep Reinforcement Learning
    Li, Teng
    Xu, Zhiyuan
    Tang, Jian
    Wang, Yanzhi
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2018, 11 (06): : 705 - 718
  • [26] Model-free Control Design Using Policy Gradient Reinforcement Learning in LPV Framework
    Bao, Yajie
    Velni, Javad Mohammadpour
    2021 EUROPEAN CONTROL CONFERENCE (ECC), 2021, : 150 - 155
  • [27] Control of neural systems at multiple scales using model-free, deep reinforcement learning
    Mitchell, B. A.
    Petzold, L. R.
    SCIENTIFIC REPORTS, 2018, 8
  • [28] Model-Free Control for Dynamic-Field Acoustic Manipulation Using Reinforcement Learning
    Latifi, Kourosh
    Kopitca, Artur
    Zhou, Quan
    IEEE ACCESS, 2020, 8 : 20597 - 20606
  • [30] Model-Free Attitude Control of Quadcopter using Disturbance Observer and Integral Reinforcement Learning
    Lee, Hanna
    Kim, Youdan
    AIAA SCITECH 2024 FORUM, 2024,