Model-free MIMO control tuning of a chiller process using reinforcement learning

Cited by: 3
Authors
Rosdahl, Christian [1 ]
Bernhardsson, B. O. [1 ]
Eisenhower, Bryan [2 ]
Affiliations
[1] Lund Univ, Dept Automat Control, Lund, Sweden
[2] Carrier World Headquarters, Palm Beach Gardens, FL USA
Keywords
BEHAVIOR;
DOI
10.1080/23744731.2023.2247938
CLC number
O414.1 [Thermodynamics]
Abstract
The performance of HVAC equipment, including chillers, continues to be pushed toward theoretical limits, which increases the need for advanced control logic to operate such equipment efficiently and robustly. At the same time, system architectures are becoming more complex; many systems have multiple compressors, expansion devices, evaporators, circuits, or other elements that challenge control design and the resulting performance. To maintain acceptable closed-loop speed of response, stability, and robustness, controllers are also becoming more complex, moving from thermostatic control to proportional-integral (PI) control and on to multiple-input multiple-output (MIMO) controllers. Model-based control design works well for synthesizing such controllers, but maintaining accurate models for numerous product variants is unrealistic and often leads to very conservative designs. To address this, we propose and demonstrate a learning-based control tuner that supports the design of MIMO decoupling PI controllers by using online information to adapt the controller coefficients from an initial guess during commissioning or operation. The approach is tested on a physics-based model of a water-cooled screw chiller. The method finds a controller that outperforms a nominal controller (two single-loop PI controllers) by reducing deviations from the operating point during disturbances while still tracking reference changes.
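
As a rough illustration of the idea described in the abstract (adapting the coefficients of a MIMO decoupling PI controller from an initial guess using model-free, online information), the sketch below tunes a 2x2 decoupling PI loop on a toy discrete-time plant with a simple random-search update. The plant matrices, cost function, disturbance, and update rule are illustrative assumptions only; they are not taken from the paper, which applies a reinforcement-learning tuner to a physics-based chiller model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete-time 2x2 plant standing in for the chiller loops
A = np.array([[0.95, 0.05], [0.02, 0.90]])
B = np.array([[0.10, 0.01], [0.02, 0.08]])

def closed_loop_cost(theta, steps=200):
    # theta = [kp1, ki1, kp2, ki2, d12, d21]: PI gains and decoupler terms
    kp = np.array([theta[0], theta[2]])
    ki = np.array([theta[1], theta[3]])
    D = np.array([[1.0, theta[4]], [theta[5], 1.0]])   # static decoupler
    x = np.zeros(2)             # plant outputs
    ie = np.zeros(2)            # integrated error
    ref = np.array([1.0, 0.5])  # setpoint step
    cost = 0.0
    for t in range(steps):
        e = ref - x
        ie = ie + e
        u = D @ (kp * e + ki * ie)                     # decoupling PI law
        d = np.array([0.2, 0.0]) if 80 <= t < 120 else np.zeros(2)  # load disturbance
        x = A @ x + B @ (u + d)
        cost += float(np.sum(e**2) + 1e-3 * np.sum(u**2))
    return cost

# Initial guess, e.g. from a nominal pair of single-loop PI designs
theta = np.array([0.5, 0.05, 0.5, 0.05, 0.0, 0.0])
best = closed_loop_cost(theta)

# Model-free tuning loop: accept random perturbations of the gains that
# lower the closed-loop cost (a stand-in for the RL update in the paper)
for _ in range(300):
    cand = theta + 0.05 * rng.standard_normal(theta.size)
    c = closed_loop_cost(cand)
    if c < best:
        theta, best = cand, c

print("tuned gains:", np.round(theta, 3), "cost:", round(best, 3))

The episodic cost here penalizes tracking error and control effort; any other measurable closed-loop objective could be substituted, which is the appeal of a model-free tuner.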
Pages: 782-794
Page count: 13
Related papers (50 total)
  • [1] Using Reinforcement Learning for Model-free Linear Quadratic Control with Process and Measurement Noises
    Yaghmaie, Farnaz Adib
    Gustafsson, Fredrik
    2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 6510 - 6517
  • [2] Model-free learning control of neutralization processes using reinforcement learning
    Syafiie, S.
    Tadeo, F.
    Martinez, E.
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2007, 20 (06) : 767 - 782
  • [3] Linear Quadratic Control Using Model-Free Reinforcement Learning
    Yaghmaie, Farnaz Adib
    Gustafsson, Fredrik
    Ljung, Lennart
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (02) : 737 - 752
  • [4] Model-Free Quantum Control with Reinforcement Learning
    Sivak, V. V.
    Eickbusch, A.
    Liu, H.
    Royer, B.
    Tsioutsios, I.
    Devoret, M. H.
    PHYSICAL REVIEW X, 2022, 12 (01)
  • [5] On the importance of hyperparameters tuning for model-free reinforcement learning algorithms
    Tejer, Mateusz
    Szczepanski, Rafal
    2024 12TH INTERNATIONAL CONFERENCE ON CONTROL, MECHATRONICS AND AUTOMATION, ICCMA, 2024, : 78 - 82
  • [6] Efficient model-free control of chiller plants via cluster-based deep reinforcement learning
    He, Kun
    Fu, Qiming
    Lu, You
    Ma, Jie
    Zheng, Yi
    Wang, Yunzhe
    Chen, Jianping
    JOURNAL OF BUILDING ENGINEERING, 2024, 82
  • [7] Model-free Predictive Optimal Iterative Learning Control using Reinforcement Learning
    Zhang, Yueqing
    Chu, Bing
    Shu, Zhan
    2022 AMERICAN CONTROL CONFERENCE, ACC, 2022, : 3279 - 3284
  • [8] Constrained model-free reinforcement learning for process optimization
    Pan, Elton
    Petsagkourakis, Panagiotis
    Mowbray, Max
    Zhang, Dongda
    del Rio-Chanona, Ehecatl Antonio
    COMPUTERS & CHEMICAL ENGINEERING, 2021, 154
  • [9] Model-Free Adaptive Control Approach Using Integral Reinforcement Learning
    Abouheaf, Mohammed
    Gueaieb, Wail
    2019 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2019), 2019, : 84 - 90
  • [10] Model-free LQ Control for Unmanned Helicopters using Reinforcement Learning
    Lee, Dong Jin
    Bang, Hyochoong
    2011 11TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS), 2011, : 117 - 120