Reduced order modeling for flow and transport problems with Barlow Twins self-supervised learning

Cited by: 0
Authors
Teeratorn Kadeethum
Francesco Ballarin
Daniel O’Malley
Youngsoo Choi
Nikolaos Bouklas
Hongkyu Yoon
Affiliations
[1] Sandia National Laboratories
[2] Cornell University
[3] Catholic University of the Sacred Heart
[4] Los Alamos National Laboratory
[5] Lawrence Livermore National Laboratory
Source
Scientific Reports, Volume 12
Keywords
DOI
Not available
Chinese Library Classification
Subject Classification
Abstract
We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROM (DL-ROM) using deep convolutional autoencoders (DC–AE) has been shown to capture nonlinear solution manifolds, but it performs inadequately when linear subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Moreover, most DL-ROM models rely on convolutional layers, which can limit their application to structured meshes. The framework proposed in this study combines an autoencoder (AE) with Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the embedding within the latent space through a joint embedding architecture. Across a series of benchmark problems of natural convection in porous media, BT–AE performs better than the previous DL-ROM framework: it provides results comparable to POD-based approaches for problems whose solutions lie within a linear subspace, and to DL-ROM autoencoder-based techniques for problems whose solutions lie on a nonlinear manifold; consequently, it bridges the gap between linear and nonlinear reduced manifolds. We illustrate that a proficient construction of the latent space is key to achieving these results, enabling us to map these latent spaces using regression models. The proposed framework achieves a relative error of 2% on average and 12% in the worst-case scenario (i.e., when the training data set is small but the parameter space is large). We also show that our framework provides a speed-up of $7 \times 10^{6}$ times in the best case, and $7 \times 10^{3}$ times on average, compared to a finite element solver. Furthermore, the BT–AE framework can operate on unstructured meshes, which provides flexibility in its application to standard numerical solvers, on-site measurements, experimental data, or a combination of these sources.
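The abstract describes the BT–AE architecture only at a high level. The following minimal sketch (PyTorch, assuming fully connected layers so that solution vectors from unstructured meshes can be handled) illustrates the general idea of pairing an autoencoder's reconstruction loss with the Barlow Twins redundancy-reduction loss on the latent space; all layer sizes, the noise-based augmentation, and the loss weights are illustrative assumptions, not the authors' published configuration.

    # Hypothetical BT–AE sketch: an autoencoder whose latent space is
    # regularized with the Barlow Twins loss. Sizes and weights are
    # illustrative assumptions, not the published configuration.
    import torch
    import torch.nn as nn

    class BTAE(nn.Module):
        def __init__(self, n_dof=1024, latent_dim=16):
            super().__init__()
            # Fully connected layers (no convolutions), so solution vectors
            # from unstructured meshes can be used directly.
            self.encoder = nn.Sequential(
                nn.Linear(n_dof, 256), nn.ReLU(), nn.Linear(256, latent_dim))
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_dof))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    def barlow_twins_loss(z_a, z_b, off_diag_weight=5e-3):
        # Drive the cross-correlation matrix of two embeddings toward the
        # identity: invariance on the diagonal, redundancy reduction off it.
        n = z_a.shape[0]
        z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
        z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
        c = (z_a.T @ z_b) / n                          # (d, d) correlation
        on_diag = (torch.diagonal(c) - 1).pow(2).sum()
        off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
        return on_diag + off_diag_weight * off_diag

    # One training step on two noisy views of a placeholder snapshot batch.
    model = BTAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 1024)
    x_hat, z_a = model(x + 0.01 * torch.randn_like(x))
    _, z_b = model(x + 0.01 * torch.randn_like(x))
    loss = nn.functional.mse_loss(x_hat, x) + 1e-2 * barlow_twins_loss(z_a, z_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

At inference time, as the abstract notes, a separate regression model (a Gaussian process or nearest-neighbor regressor would be one plausible choice) maps the PDE parameters to latent coordinates, which the trained decoder expands back to full solution fields.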
Related Papers
50 records in total
  • [41] Self-Supervised Learning of Robot Manipulation
    Tommy, Robin
    Krishnan, Athira R.
    2020 4TH INTERNATIONAL CONFERENCE ON AUTOMATION, CONTROL AND ROBOTS (ICACR 2020), 2020, : 22 - 25
  • [42] Cross Pixel Optical-Flow Similarity for Self-supervised Learning
    Mahendran, Aravindh
    Thewlis, James
    Vedaldi, Andrea
    COMPUTER VISION - ACCV 2018, PT V, 2019, 11365 : 99 - 116
  • [43] Spatio-Temporal Self-Supervised Learning for Traffic Flow Prediction
    Ji, Jiahao
    Wang, Jingyuan
    Huang, Chao
    Wu, Junjie
    Xu, Boren
    Wu, Zhenhe
    Zhang, Junbo
    Zheng, Yu
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4356 - 4364
  • [44] Self-supervised Learning: A Succinct Review
    Rani, Veenu
    Nabi, Syed Tufael
    Kumar, Munish
    Mittal, Ajay
    Kumar, Krishan
    ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING, 2023, 30 (04) : 2761 - 2775
  • [45] Self-Supervised Learning for Recommender System
    Huang, Chao
    Wang, Xiang
    He, Xiangnan
    Yin, Dawei
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 3440 - 3443
  • [46] Audio self-supervised learning: A survey
    Liu, Shuo
    Mallol-Ragolta, Adria
    Parada-Cabaleiro, Emilia
    Qian, Kun
    Jing, Xin
    Kathan, Alexander
    Hu, Bin
    Schuller, Bjorn W.
    PATTERNS, 2022, 3 (12):
  • [47] MarioNette: Self-Supervised Sprite Learning
    Smirnov, Dmitriy
    Gharbi, Michael
    Fisher, Matthew
    Guizilini, Vitor
    Efros, Alexei A.
    Solomon, Justin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [49] Self-supervised learning for outlier detection
    Diers, Jan
    Pigorsch, Christian
    STAT, 2021, 10 (01):
  • [50] Self-Supervised Learning for Multimedia Recommendation
    Tao, Zhulin
    Liu, Xiaohao
    Xia, Yewei
    Wang, Xiang
    Yang, Lifang
    Huang, Xianglin
    Chua, Tat-Seng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 5107 - 5116