Continuous Control of Complex Chemical Reaction Network with Reinforcement Learning

Cited by: 3
Authors
Alhazmi, Khalid [1 ]
Sarathy, S. Mani [1 ]
Affiliation
[1] King Abdullah Univ Sci & Technol KAUST, Thuwal, Saudi Arabia
DOI: 10.23919/ecc51009.2020.9143688
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Discipline code: 0812
Abstract
The goal of process control is to maintain a process at the desired operating conditions. Disturbances, measurement uncertainties, and high-order dynamics in complex, highly integrated chemical processes pose a challenging control problem. Although advanced process controllers such as model predictive control (MPC) have been implemented successfully on hard control problems, they are difficult to develop, rely on a process model, and require high-performance computing and continuous maintenance. Reinforcement learning (RL) is an appealing option for such complex systems, but little work has applied RL to chemical reaction systems of practical significance, examined the structure of the RL agent, or evaluated its performance against benchmark measures. This work (1) applies a state-of-the-art reinforcement learning algorithm, deep deterministic policy gradient (DDPG), to a network of reactions with challenging dynamics and practical significance; (2) simulates disturbances and measurement uncertainties; and (3) defines an observation space based on the working concept of a PID controller, tunes the reward function to achieve the desired controller performance, and evaluates the RL controller in terms of setpoint tracking, disturbance rejection, and robustness to parameter uncertainties.
Pages: 1066-1068 (3 pages)
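
The observation-space and reward-function design summarized in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' published code: it assumes the agent observes the tracking error together with its running integral and finite-difference derivative (the three quantities a PID controller acts on), and that the reward penalizes squared tracking error plus abrupt control moves. The names (pid_observation, tracking_reward), the noise level, and the weights are illustrative assumptions.

```python
import numpy as np

def pid_observation(y_meas, y_sp, err_integral, err_prev, dt):
    """PID-inspired observation: error, its integral, and its derivative.

    Illustrative sketch only; mirrors the PID working concept mentioned
    in the abstract, not the paper's exact observation definition.
    """
    err = y_sp - y_meas
    err_integral = err_integral + err * dt      # accumulated (integral) term
    err_deriv = (err - err_prev) / dt           # finite-difference (derivative) term
    obs = np.array([err, err_integral, err_deriv], dtype=np.float32)
    return obs, err_integral, err

def tracking_reward(y_meas, y_sp, u, u_prev, w_err=1.0, w_du=0.1):
    """Setpoint-tracking reward with a penalty on large control moves (assumed weights)."""
    return -(w_err * (y_sp - y_meas) ** 2 + w_du * (u - u_prev) ** 2)

# Toy usage on a noisy first-order process (assumed dynamics, not the reaction network).
rng = np.random.default_rng(0)
dt, y, y_sp = 0.1, 0.0, 1.0
err_int, err_prev, u_prev = 0.0, 0.0, 0.0
for _ in range(5):
    y_meas = y + rng.normal(scale=0.01)         # simulated measurement noise
    obs, err_int, err_prev = pid_observation(y_meas, y_sp, err_int, err_prev, dt)
    u = float(np.clip(2.0 * obs[0], -1.0, 1.0)) # placeholder policy standing in for the DDPG actor
    r = tracking_reward(y_meas, y_sp, u, u_prev)
    y += dt * (-y + u)                          # first-order plant step
    u_prev = u
    print(f"obs={obs}, reward={r:.4f}")
```

In the paper's setting, the placeholder policy above would be replaced by a trained DDPG actor network, and the plant by the chemical reaction network model with simulated disturbances.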
Related papers (50 total)
  • [1] Reinforcement Learning for Improving Chemical Reaction Performance. Hoque, Ajnabiul; Surve, Mihir; Kalyanakrishnan, Shivaram; Sunoj, Raghavan B. Journal of the American Chemical Society, 2024.
  • [2] Continuous Control with a Combination of Supervised and Reinforcement Learning. Kangin, Dmitry; Pugeault, Nicolas. 2018 International Joint Conference on Neural Networks (IJCNN), 2018: 163-170.
  • [3] Reinforcement learning for continuous stochastic control problems. Munos, R; Bourgine, P. Advances in Neural Information Processing Systems 10, 1998, 10: 1029-1035.
  • [4] Competitive reinforcement learning in continuous control tasks. Abramson, M; Pachowicz, P; Wechsler, H. Proceedings of the International Joint Conference on Neural Networks 2003, Vols 1-4, 2003: 1909-1914.
  • [5] Benchmarking Deep Reinforcement Learning for Continuous Control. Duan, Yan; Chen, Xi; Houthooft, Rein; Schulman, John; Abbeel, Pieter. International Conference on Machine Learning, Vol 48, 2016, 48.
  • [6] Brain topology improved spiking neural network for efficient reinforcement learning of continuous control. Wang, Yongjian; Wang, Yansong; Zhang, Xinhe; Du, Jiulin; Zhang, Tielin; Xu, Bo. Frontiers in Neuroscience, 2024, 18.
  • [7] Applying neural network to reinforcement learning in continuous spaces. Wang, DL; Gao, Y; Yang, P. Advances in Neural Networks - ISNN 2005, Pt 1, Proceedings, 2005, 3496: 621-626.
  • [8] Learning Continuous Control Actions for Robotic Grasping with Reinforcement Learning. Shahid, Asad Ali; Roveda, Loris; Piga, Dario; Braghin, Francesco. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020: 4066-4072.
  • [9] Autonomous Surface Craft Continuous Control with Reinforcement Learning. Andrey, Sorokin; Ogli, Farkhadov Mais Pasha. 2021 IEEE 15th International Conference on Application of Information and Communication Technologies (AICT2021), 2021.
  • [10] A Tour of Reinforcement Learning: The View from Continuous Control. Recht, Benjamin. Annual Review of Control, Robotics, and Autonomous Systems, Vol 2, 2019, 2: 253-279.