Reduced order modeling for flow and transport problems with Barlow Twins self-supervised learning

Cited by: 0
Authors
Teeratorn Kadeethum
Francesco Ballarin
Daniel O’Malley
Youngsoo Choi
Nikolaos Bouklas
Hongkyu Yoon
Institutions
[1] Sandia National Laboratories
[2] Cornell University
[3] Catholic University of the Sacred Heart
[4] Los Alamos National Laboratory
[5] Lawrence Livermore National Laboratory
Source
Scientific Reports | Volume 12
Abstract
We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROM (DL-ROM) using deep convolutional autoencoders (DC–AE) has been shown to capture nonlinear solution manifolds, but it fails to perform adequately when linear subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Moreover, most DL-ROM models rely on convolutional layers, which may limit their application to structured meshes. The framework proposed in this study combines an autoencoder (AE) with Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the embedding with the latent space through a joint embedding architecture. Through a series of benchmark problems of natural convection in porous media, BT–AE performs better than the previous DL-ROM framework: it provides results comparable to POD-based approaches for problems where the solution lies within a linear subspace, as well as to DL-ROM autoencoder-based techniques where the solution lies on a nonlinear manifold; consequently, it bridges the gap between linear and nonlinear reduced manifolds. We illustrate that a proficient construction of the latent space is key to achieving these results, enabling us to map these latent spaces using regression models. The proposed framework achieves a relative error of 2% on average and 12% in the worst-case scenario (i.e., when the training data set is small but the parameter space is large).
We also show that our framework provides a speed-up of 7 × 10^6 times in the best case, and 7 × 10^3 times on average, compared to a finite element solver. Furthermore, this BT–AE framework can operate on unstructured meshes, which provides flexibility in its application to standard numerical solvers, on-site measurements, experimental data, or a combination of these sources.
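The Barlow Twins objective referenced in the abstract can be illustrated with a minimal sketch. This is the standard published BT loss (driving the cross-correlation matrix of two embedding views toward the identity), not the authors' implementation; the batch size, embedding dimension, and the trade-off weight `lam` are assumptions for illustration only.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Standard Barlow Twins objective on two batches of embeddings.

    z_a, z_b: (batch, dim) embeddings of two views of the same samples.
    The loss pushes the empirical cross-correlation matrix toward the
    identity: diagonal -> 1 (invariance term), off-diagonal -> 0
    (redundancy-reduction term, weighted by lam).
    """
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    # Empirical (dim x dim) cross-correlation matrix
    c = (z_a.T @ z_b) / n
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)
    return on_diag + lam * off_diag
```

Two identical views yield a near-zero invariance term, while two unrelated batches produce a large one, which is what lets BT shape an informative latent space without labels.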
Related papers
50 items in total
  • [31] Traffic Prediction with Self-Supervised Learning: A Heterogeneity-Aware Model for Urban Traffic Flow Prediction Based on Self-Supervised Learning
    Gao, Min
    Wei, Yingmei
    Xie, Yuxiang
    Zhang, Yitong
    MATHEMATICS, 2024, 12 (09)
  • [32] Self-supervised learning based on Transformer for flow reconstruction and prediction
    Xu, Bonan
    Zhou, Yuanye
    Bian, Xin
    PHYSICS OF FLUIDS, 2024, 36 (02)
  • [33] A New Self-supervised Method for Supervised Learning
    Yang, Yuhang
    Ding, Zilin
    Cheng, Xuan
    Wang, Xiaomin
    Liu, Ming
    INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155
  • [34] Self-supervised Spatiotemporal Learning via Video Clip Order Prediction
    Xu, Dejing
    Xiao, Jun
    Zhao, Zhou
    Shao, Jian
    Xie, Di
    Zhuang, Yueting
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10326 - 10335
  • [35] A deep learning based reduced order modeling for stochastic underground flow problems
    Wang, Yiran
    Chung, Eric
    Fu, Shubin
    JOURNAL OF COMPUTATIONAL PHYSICS, 2022, 467
  • [36] Self-supervised learning of spatiotemporal thermal signatures in additive manufacturing using reduced order physics models and transformers
    Fernandez-Zelaia, Patxi
    Dryepondt, Sebastien N.
    Ziabari, Amir Koushyar
    Kirka, Michael M.
    COMPUTATIONAL MATERIALS SCIENCE, 2024, 232
  • [37] Self-Supervised Neural Topic Modeling
    Bahrainian, Seyed Ali
    Jaggi, Martin
    Eickhoff, Carsten
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3341 - 3350
  • [38] Modeling Disease Progression in Retinal OCTs with Longitudinal Self-supervised Learning
    Rivail, Antoine
    Schmidt-Erfurth, Ursula
    Vogl, Wolf-Dieter
    Waldstein, Sebastian M.
    Riedl, Sophie
    Grechenig, Christoph
    Wu, Zhichao
    Bogunovic, Hrvoje
    PREDICTIVE INTELLIGENCE IN MEDICINE (PRIME 2019), 2019, 11843 : 44 - 52
  • [39] Self-Supervised Adversarial Variational Learning
    Ye, Fei
    Bors, Adrian. G.
    PATTERN RECOGNITION, 2024, 148
  • [40] Self-supervised Learning for Spinal MRIs
    Jamaludin, Amir
    Kadir, Timor
    Zisserman, Andrew
    DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, 2017, 10553 : 294 - 302