Learning high-dimensional parametric maps via reduced basis adaptive residual networks

Cited by: 14
Authors
O'Leary-Roseberry, Thomas [1 ]
Du, Xiaosong [2 ]
Chaudhuri, Anirban [1 ]
Martins, Joaquim R. R. A. [3 ]
Willcox, Karen [1 ,4 ]
Ghattas, Omar [1 ,5 ]
Affiliations
[1] Univ Texas Austin, Oden Inst Computat Engn & Sci, 201 24th St, Austin, TX 78712 USA
[2] Missouri Univ Sci & Technol, Dept Mech & Aerosp Engn, 400 13th St, Rolla, MO 65409 USA
[3] Univ Michigan, Dept Aerosp Engn, 1320 Beal Ave, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, 2617 Wichita St, C0600, Austin, TX 78712 USA
[5] Univ Texas Austin, Walker Dept Mech Engn, 204 Dean Keeton St, Austin, TX 78712 USA
Keywords
Deep learning; Neural networks; Parametrized PDEs; Control flows; Residual networks; Adaptive surrogate construction; APPROXIMATION; FOUNDATIONS; REDUCTION; FRAMEWORK;
DOI
10.1016/j.cma.2022.115730
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
We propose a scalable framework for learning high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs. When only limited training data are available, a compact parametrization is beneficial because it ameliorates the ill-posedness of the neural network training problem. By linearly restricting high-dimensional maps to informed reduced bases of the inputs, one can compress high-dimensional maps in a constructive way that can be used to detect appropriate basis ranks, equipped with rigorous error estimates. The task of the scalable neural network learning framework is then to learn the nonlinear compressed reduced basis mapping. Unlike the reduced basis construction, however, neural network constructions are not guaranteed to reduce errors by adding representation power, making it difficult to achieve good practical performance. Inspired by recent approximation theory that connects ResNets to sequential minimizing flows, we present an adaptive ResNet construction algorithm. This algorithm allows for depth-wise enrichment of the neural network approximation: a shallow network is trained first and then adapted, which achieves good practical performance. We prove universal approximation of the associated neural network class for L^2_nu functions on compact sets. Our overall framework provides constructive means to detect appropriate breadth and depth, and the related compact parametrizations of neural networks, significantly reducing the need for architectural hyperparameter tuning. Numerical experiments for parametric PDE problems and a 3D CFD wing design optimization parametric map demonstrate that the proposed methodology achieves remarkably high accuracy from limited training data and outperforms the other neural network strategies we compared against. (C) 2022 Elsevier B.V. All rights reserved.
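As a rough illustration of the two constructive ingredients the abstract describes, the sketch below (hypothetical names and a toy model, not the authors' implementation; gradient training is omitted) shows (1) rank detection for a reduced basis from snapshot data via truncated SVD, where the tail singular values quantify the truncation error of the linear restriction, and (2) depth-wise ResNet enrichment in which each new residual block is zero-initialized, so adding depth preserves the already-trained map until the new block is trained.

```python
import numpy as np

def detect_rank(snapshots, tol=1e-6):
    """Return (basis V, rank r): the smallest r whose leading left
    singular vectors capture the snapshot energy to relative
    tolerance tol (squared-error criterion on the singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return U[:, :r], r

class AdaptiveResNet:
    """Minimal reduced-basis ResNet: x -> x + W2 tanh(W1 x) per block."""
    def __init__(self, dim, rng):
        self.dim, self.rng, self.blocks = dim, rng, []
        self.add_block(zero_init=False)   # initial shallow network

    def add_block(self, zero_init=True):
        # Zero-initializing W2 makes the new block act as the identity,
        # so depth-wise enrichment leaves the current map unchanged.
        W1 = 0.1 * self.rng.standard_normal((self.dim, self.dim))
        W2 = (np.zeros((self.dim, self.dim)) if zero_init else
              0.1 * self.rng.standard_normal((self.dim, self.dim)))
        self.blocks.append((W1, W2))

    def __call__(self, x):
        for W1, W2 in self.blocks:
            x = x + W2 @ np.tanh(W1 @ x)
        return x
```

In this sketch, `detect_rank` plays the role of the constructive basis-rank detection, and `add_block` mimics the adaptive deepening step: because the enriched network initially reproduces the shallow network exactly, subsequent training can only be measured against (and ideally improve on) the current approximation.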
Pages: 29
Related papers
50 in total
  • [1] High-dimensional variable selection via low-dimensional adaptive learning
    Staerk, Christian
    Kateri, Maria
    Ntzoufras, Ioannis
    ELECTRONIC JOURNAL OF STATISTICS, 2021, 15 (01): 830-879
  • [2] Derivative-informed projected neural networks for high-dimensional parametric maps governed by PDEs
    O'Leary-Roseberry, Thomas
    Villa, Umberto
    Chen, Peng
    Ghattas, Omar
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2022, 388
  • [3] Synchronization and parameter identification of high-dimensional discrete chaotic systems via parametric adaptive control
    Yang, Y
    Ma, XK
    Zhang, H
    CHAOS SOLITONS & FRACTALS, 2006, 28 (01): 244-251
  • [4] High-Dimensional Learning of Linear Causal Networks via Inverse Covariance Estimation
    Loh, Po-Ling
    Bühlmann, Peter
    JOURNAL OF MACHINE LEARNING RESEARCH, 2014, 15: 3065-3105
  • [5] High-dimensional learning of narrow neural networks
    Cui, Hugo
    JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, 2025, 2025 (02)
  • [6] High-dimensional learning framework for adaptive document filtering
    Lam, W
    Yu, KL
    COMPUTATIONAL INTELLIGENCE, 2003, 19 (01): 42-63
  • [7] High-Dimensional Adaptive Sparse Polynomial Interpolation and Applications to Parametric PDEs
    Chkifa, Abdellah
    Cohen, Albert
    Schwab, Christoph
    FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2014, 14 (04): 601-633
  • [8] Certified reduced basis method in geosciences: Addressing the challenge of high-dimensional problems
    Degen, Denise
    Veroy, Karen
    Wellmann, Florian
    COMPUTATIONAL GEOSCIENCES, 2020, 24: 241-259