Low-rank statistical finite elements for scalable model-data synthesis

Cited by: 5
Authors
Duffin, Connor [1 ,2 ,5 ]
Cripps, Edward [1 ]
Stemler, Thomas [1 ,3 ]
Girolami, Mark [2 ,4 ]
Affiliations
[1] Univ Western Australia, Dept Math & Stat, Crawley, WA 6009, Australia
[2] Univ Cambridge, Dept Engn, Cambridge CB2 1PZ, England
[3] Univ Western Australia, Complex Syst Grp, Crawley, WA 6009, Australia
[4] Alan Turing Inst, Lloyds Register Programme Data Centr Engn, London NW1 2DB, England
[5] Dept Engn, Trumpington St, Cambridge CB2 1PZ, England
Funding
Australian Research Council; UK Engineering and Physical Sciences Research Council;
Keywords
Bayesian filtering; Finite element methods; Reaction-diffusion; Bayesian inverse problems; CATASTROPHIC FILTER DIVERGENCE; DATA ASSIMILATION; GAUSSIAN-PROCESSES; CHEMICAL-SYSTEMS; INVERSE PROBLEMS; OSCILLATIONS;
DOI
10.1016/j.jcp.2022.111261
Chinese Library Classification
TP39 [Computer applications];
Subject Classification Code
081203; 0835;
Abstract
Statistical learning additions to physically derived mathematical models are gaining traction in the literature. A recent approach augments the underlying physics of the governing equations with data-driven Bayesian statistical methodology. Coined statFEM, the method acknowledges a priori model misspecification by embedding stochastic forcing within the governing equations. Upon receipt of additional data, the posterior distribution of the discretised finite element solution is updated using classical Bayesian filtering techniques. The resultant posterior jointly quantifies uncertainty associated with the ubiquitous problem of model misspecification and with the data intended to represent the true process of interest. Despite this appeal, computational scalability is a challenge to statFEM's application to the high-dimensional problems typically encountered in physical and industrial contexts. This article overcomes that hurdle by embedding a low-rank approximation of the underlying dense covariance matrix, obtained from the leading-order modes of the full-rank alternative. Demonstrated on a series of reaction-diffusion problems of increasing dimension, using experimental and simulated data, the method reconstructs the sparsely observed data-generating processes with minimal loss of information, in both the posterior mean and variance, paving the way for further integration of physical and probabilistic approaches to complex systems. (C) 2022 The Author(s). Published by Elsevier Inc.
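To make the low-rank construction concrete, the sketch below is a minimal NumPy illustration under generic assumptions, not the authors' statFEM implementation: the function name low_rank_covariance and the synthetic covariance are hypothetical. It approximates a dense covariance matrix by a factor built from its k leading eigenmodes, so that C ≈ L L^T.

    import numpy as np

    def low_rank_covariance(C, k):
        # Approximate a symmetric positive semi-definite covariance C by its
        # k leading eigenmodes, returning an n x k factor L with C ~= L @ L.T.
        eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
        idx = np.argsort(eigvals)[::-1][:k]       # indices of the k largest modes
        lam = np.clip(eigvals[idx], 0.0, None)    # guard against round-off negatives
        V = eigvecs[:, idx]
        return V * np.sqrt(lam)                   # scale each mode by its amplitude

    # Toy usage on a synthetic 500 x 500 covariance (illustrative data only).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    C = A @ A.T / 500 + 1e-3 * np.eye(500)
    L = low_rank_covariance(C, 20)
    err = np.linalg.norm(C - L @ L.T) / np.linalg.norm(C)
    print(f"rank-20 relative Frobenius error: {err:.3f}")

Carrying an n x k factor of this kind through the filtering recursions in place of the dense n x n covariance reduces storage from O(n^2) to O(nk), which is the sort of saving that underpins the scalability gain described in the abstract.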
Pages: 17
Related Papers
50 records in total
  • [31] Robust low-rank data matrix approximations
    Feng, XingDong
    He, XuMing
    SCIENCE CHINA-MATHEMATICS, 2017, 60 (02) : 189 - 200
  • [32] STRUCTURED LOW-RANK APPROXIMATION WITH MISSING DATA
    Markovsky, Ivan
    Usevich, Konstantin
    SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS, 2013, 34 (02) : 814 - 830
  • [33] On the low-rank approximation of data on the unit sphere
    Chu, M
    Del Buono, N
    Lopez, L
    Politi, T
    SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS, 2005, 27 (01) : 46 - 60
  • [35] Matrix recovery with implicitly low-rank data
    Xie, Xingyu
    Wu, Jianlong
    Liu, Guangcan
    Wang, Jun
    NEUROCOMPUTING, 2019, 334 : 219 - 226
  • [37] A New Representation for Data: Sparse and Low-Rank
    Sun, Jing
    Wu, Zongze
    Zeng, Deyu
    Ren, Zhigang
    2018 CHINESE AUTOMATION CONGRESS (CAC), 2018, : 1477 - 1482
  • [38] Imputation of Streaming Low-Rank Tensor Data
    Mardani, Morteza
    Mateos, Gonzalo
    Giannakis, Georgios B.
    2014 IEEE 8TH SENSOR ARRAY AND MULTICHANNEL SIGNAL PROCESSING WORKSHOP (SAM), 2014, : 433 - 436
  • [39] Low-Rank Time-Frequency Synthesis
    Fevotte, Cedric
    Kowalski, Matthieu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [40] Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
    Yaguchi, Atsushi
    Suzuki, Taiji
    Nitta, Shuhei
    Sakata, Yukinobu
    Tanizawa, Akiyuki
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3249 - 3256