Toward Stable, General Machine-Learned Models of the Atmospheric Chemical System

Cited by: 26
Authors:
Kelp, Makoto M. [1 ]
Jacob, Daniel J. [1 ]
Kutz, J. Nathan [2 ]
Marshall, Julian D. [3 ]
Tessum, Christopher W. [4 ]
Affiliations:
[1] Harvard Univ, Dept Earth & Planetary Sci, 20 Oxford St, Cambridge, MA 02138 USA
[2] Univ Washington, Dept Appl Math, Seattle, WA 98195 USA
[3] Univ Washington, Dept Civil & Environm Engn, Seattle, WA 98195 USA
[4] Univ Illinois, Dept Civil & Environm Engn, Urbana, IL 61801 USA
Keywords:
machine learning; atmospheric chemical mechanism; model emulation; surrogate model; chemical mechanism; tropospheric chemistry; neural networks; gas; air; sensitivity; impacts; ozone; area
DOI:
10.1029/2020JD032759
Chinese Library Classification:
P4 [Atmospheric Sciences (Meteorology)]
Subject Classification:
0706; 070601
Abstract:
Atmospheric chemistry models, which are components of models that simulate air pollution and climate change, are computationally expensive. Previous studies have shown that machine-learned atmospheric chemical solvers can be orders of magnitude faster than traditional integration methods but tend to suffer from numerical instability. Here, we present a modeling framework that reduces error accumulation compared to previous work while maintaining computational efficiency. Our approach is novel in that it (1) uses a recurrent training regime that results in extended (>1 week) simulations without exponential error accumulation and (2) can reversibly compress the number of modeled chemical species by >80% without further decreasing accuracy. We observe a ~260x speedup (~1,900x with specialized hardware) compared to the traditional solver. We use random initial conditions in training to promote general applicability across a wide range of atmospheric conditions. For ozone (concentrations ranging from 0 to 70 ppb), our model predictions over a 24-hr simulation period match those of the reference solver with a median error of 2.7 ppb and <19 ppb error across 99% of simulations initialized with random noise. Error can be significantly higher in the remaining 1% of simulations, which include extreme concentration fluctuations simulated by the reference model. Results are similar for total particulate matter (median error of 16 μg/m³ and <32 μg/m³ across 99% of simulations, with concentrations ranging from 0 to 150 μg/m³). Finally, we discuss practical implications of our modeling framework and next steps for improvements. The machine learning models described here are not yet replacements for traditional chemistry solvers but represent a step toward that goal.
Pages: 13
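
Illustrative sketch: the abstract above describes two ingredients, a recurrent (multi-step rollout) training regime that suppresses compounding error, and a reversible compression of the chemical-species vector by >80%. The paper itself contains no code in this record; the following is a minimal, hypothetical PyTorch-style sketch of those two ideas only. All names (LatentChemStepper, rollout_loss), dimensions (N_SPECIES, N_LATENT), layer sizes, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch (hypothetical, not the authors' code): a latent-space
# surrogate chemistry stepper trained with a multi-step ("recurrent")
# rollout loss. Dimensions and architecture are assumptions.
import torch
import torch.nn as nn

N_SPECIES = 101        # assumed size of the chemical-species vector
N_LATENT = 20          # >80% compression of the species vector (assumption)
ROLLOUT_STEPS = 24     # train on multi-step rollouts, not single steps

class LatentChemStepper(nn.Module):
    """Encode species -> latent, step the latent state forward in time, decode back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_SPECIES, 64), nn.ReLU(),
                                     nn.Linear(64, N_LATENT))
        self.stepper = nn.Sequential(nn.Linear(N_LATENT, 64), nn.ReLU(),
                                     nn.Linear(64, N_LATENT))
        self.decoder = nn.Sequential(nn.Linear(N_LATENT, 64), nn.ReLU(),
                                     nn.Linear(64, N_SPECIES))

    def forward(self, c0, n_steps):
        # Advance the compressed state recurrently; decode each step.
        z = self.encoder(c0)
        outputs = []
        for _ in range(n_steps):
            z = z + self.stepper(z)          # residual update in latent space
            outputs.append(self.decoder(z))
        return torch.stack(outputs, dim=1)   # (batch, n_steps, N_SPECIES)

def rollout_loss(model, c0, reference_trajectory):
    """Penalize error over the whole rollout so that one-step errors that
    would otherwise compound over time are exposed during training."""
    pred = model(c0, reference_trajectory.shape[1])
    return nn.functional.mse_loss(pred, reference_trajectory)

# Usage with random placeholder data standing in for reference-solver output:
model = LatentChemStepper()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
c0 = torch.rand(32, N_SPECIES)                   # random initial conditions
ref = torch.rand(32, ROLLOUT_STEPS, N_SPECIES)   # placeholder reference trajectories
optimizer.zero_grad()
loss = rollout_loss(model, c0, ref)
loss.backward()
optimizer.step()

Training against the full trajectory, rather than a single time step, is the hedged reading of the "recurrent training regime" in the abstract; the encoder/decoder pair stands in for the reversible species compression.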