Data-Driven Control of COVID-19 in Buildings: A Reinforcement-Learning Approach

Cited by: 3
Authors
Hosseinloo, Ashkan Haji [1 ]
Nabi, Saleh [2 ]
Hosoi, Anette [3 ]
Dahleh, Munther A. [1 ]
Affiliations
[1] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[2] Mitsubishi Elect Res Labs, Cambridge, MA 02139 USA
[3] MIT, Dept Mech Engn, Cambridge, MA 02139 USA
Keywords
Disease control; reinforcement learning; data-driven control; HVAC system; AIRBORNE TRANSMISSION; VENTILATION;
DOI
10.1109/TASE.2023.3315549
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In addition to causing a public health crisis, the COVID-19 pandemic has led to the shutdown and closure of workplaces, with an estimated total cost of more than $16 trillion. Given the long hours an average person spends in buildings and indoor environments, this research article proposes data-driven control strategies to design optimal indoor airflow that minimizes the exposure of occupants to viral pathogens in built environments. A general control framework is put forward for designing an optimal velocity field, and proximal policy optimization (PPO), a reinforcement learning algorithm, is employed to solve the control problem in a data-driven fashion. The same framework is used for optimal placement of disinfectants to neutralize the viral pathogens, as an alternative to airflow design when the latter is practically infeasible or hard to implement. We show, via computational simulations, that the control agent learns the optimal policy in both scenarios within a reasonable time. The proposed data-driven control framework will have significant societal and economic benefits by setting the foundation for an improved methodology in designing case-specific infection-control guidelines that can be realized by affordable ventilation devices and disinfectants.

Note to Practitioners: This paper is motivated by the problem of COVID-19 infection spread in enclosed spaces, but it also applies to other airborne pathogens. Airborne disease contagion often takes place in indoor environments; however, ventilation systems are almost never designed to contain the spread of pathogens. This is mainly because airflow design requires solving high-dimensional nonlinear partial differential equations, known in fluid dynamics as the Navier-Stokes equations. In this paper, we propose a data-driven approach for solving the control problem of pathogen containment without solving the fluid-dynamics equations. To this end, we first formulate the problem mathematically as an optimal control problem and then cast it as a reinforcement learning (RL) task. Reinforcement learning is the data-driven science of sequential decision-making and control, in which the controller finds an optimal solution by systematic trial and error, without access to the system dynamics, i.e., the fluid and pathogen dynamics in this paper. We employ a state-of-the-art RL algorithm, PPO, to solve for the optimal airflow in a room so as to minimize the exposure risk of occupants. Once calculated, the optimal airflow can be realized, via reverse engineering, by proper placement of ventilation equipment, e.g., inlets, outlets, and fans. As an alternative to airflow design, we use the same data-driven techniques to find an optimal placement for a pathogen disinfectant, if one exists, such as hydrogen peroxide for COVID-19. Our results show the efficacy of our data-driven approach in designing a steady-state controller with full access to the system states. In future research, we will address controller design with sparse measurements of the system states.
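The record does not include an implementation, but as a rough illustration of how the RL formulation described in the abstract might be set up, the following Python sketch trains PPO on a toy surrogate of the pathogen-transport problem. All specifics here are assumptions made for illustration and are not from the paper: the environment name (RoomAirflowEnv), the one-dimensional advection-diffusion model used in place of the full Navier-Stokes/CFD simulation, the exposure-plus-effort reward weights, and the choice of Gymnasium with the Stable-Baselines3 PPO implementation.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # assumed tooling; the paper does not specify a library


class RoomAirflowEnv(gym.Env):
    """Hypothetical toy 1-D advection-diffusion surrogate for indoor pathogen transport.

    A pathogen source releases concentration in one cell; an occupant sits in
    another. The agent controls a bulk air velocity and is rewarded for keeping
    the concentration at the occupant's location low.
    """

    def __init__(self, n_cells=20, dt=0.5, horizon=100):
        super().__init__()
        self.n_cells, self.dt, self.horizon = n_cells, dt, horizon
        self.source_idx = 2              # cell where pathogen is released
        self.occupant_idx = n_cells - 2  # cell where the occupant sits
        # Action: signed bulk air velocity, scaled to [-1, 1]
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: full concentration field (the "full state access" case)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(n_cells,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.c = np.zeros(self.n_cells, dtype=np.float32)
        self.t = 0
        return self.c.copy(), {}

    def step(self, action):
        u = 0.5 * float(np.clip(action[0], -1.0, 1.0))  # air velocity
        D, dx = 0.05, 0.25                              # diffusivity, cell size
        c = self.c.astype(np.float64)
        # First-order upwind advection + central diffusion, explicit Euler step
        adv = np.zeros_like(c)
        if u >= 0.0:
            adv[1:] = -u * (c[1:] - c[:-1]) / dx
        else:
            adv[:-1] = -u * (c[1:] - c[:-1]) / dx
        diff = np.zeros_like(c)
        diff[1:-1] = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        c = np.clip(c + self.dt * (adv + diff), 0.0, None)
        c[self.source_idx] += 1.0   # continuous release at the source
        c[0] = c[-1] = 0.0          # open boundaries act as perfect extraction
        self.c = c.astype(np.float32)
        self.t += 1
        # Reward: penalize exposure at the occupant plus actuation effort
        reward = -float(self.c[self.occupant_idx]) - 0.01 * u**2
        return self.c.copy(), reward, False, self.t >= self.horizon, {}


if __name__ == "__main__":
    env = RoomAirflowEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)  # modest budget for the toy problem
```

In the paper's setting, the step dynamics would instead come from a fluid-and-pathogen simulation and the action would parameterize the velocity field or the disinfectant placement, but the RL interface (state, action, exposure-based reward) would remain the same.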
Pages: 5691-5699
Number of pages: 9
Related Papers
50 records in total
  • [31] COVID-19 Critical Illness: A Data-Driven Review
    Ginestra, Jennifer C.
    Mitchell, Oscar J. L.
    Anesi, George L.
    Christie, Jason D.
    ANNUAL REVIEW OF MEDICINE, 2022, 73 : 95 - 111
  • [32] Data-Driven Predictive Control of Buildings; A Regression Based Approach
    Khosravi, Mohammad
    Eichler, Annika
    Aboudonia, Ahmed
    Buck, Roger
    Smith, Roy S.
    2019 3RD IEEE CONFERENCE ON CONTROL TECHNOLOGY AND APPLICATIONS (IEEE CCTA 2019), 2019, : 777 - 782
  • [33] Safe Reinforcement Learning using Data-Driven Predictive Control
    Selim, Mahmoud
    Alanwar, Amr
    El-Kharashi, M. Watheq
    Abbas, Hazem M.
    Johansson, Karl H.
    2022 5TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, SIGNAL PROCESSING, AND THEIR APPLICATIONS (ICCSPA), 2022,
  • [34] Smart cities and a data-driven response to COVID-19
    James, Philip
    Das, Ronnie
    Jalosinska, Agata
    Smith, Luke
    DIALOGUES IN HUMAN GEOGRAPHY, 2020, 10 (02) : 255 - 259
  • [35] Identification and prediction of time-varying parameters of COVID-19 model: a data-driven deep learning approach
    Long, Jie
    Khaliq, A. Q. M.
    Furati, K. M.
    INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2021, 98 (08) : 1617 - 1632
  • [36] Data-Driven Prediction of COVID-19 Daily New Cases through a Hybrid Approach of Machine Learning Unsupervised and Deep Learning
    Manuel Ramirez-Alcocer, Ulises
    Tello-Leal, Edgar
    Macias-Hernandez, Barbara A.
    David Hernandez-Resendiz, Jaciel
    ATMOSPHERE, 2022, 13 (08)
  • [37] The Geography of the Covid-19 Pandemic: A Data-Driven Approach to Exploring Geographical Driving Forces
    Hass, Frederik Seeup
    Arsanjani, Jamal Jokar
    INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, 2021, 18 (06) : 1 - 19
  • [38] Assessing the dynamics and impact of COVID-19 vaccination on disease spread: A data-driven approach
    Waseel, Farhad
    Streftaris, George
    Rudrusamy, Bhuvendhraa
    Dass, Sarat C.
    INFECTIOUS DISEASE MODELLING, 2024, 9 (02) : 527 - 556
  • [39] Data-Driven Dynamic Multiobjective Optimal Control: An Aspiration-Satisfying Reinforcement Learning Approach
    Mazouchi, Majid
    Yang, Yongliang
    Modares, Hamidreza
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6183 - 6193
  • [40] Data-driven active corrective control in power systems: an interpretable deep reinforcement learning approach
    Li, Beibei
    Liu, Qian
    Hong, Yue
    He, Yuxiong
    Zhang, Lihong
    He, Zhihong
    Feng, Xiaoze
    Gao, Tianlu
    Yang, Li
    FRONTIERS IN ENERGY RESEARCH, 2024, 12