Learning high-dimensional parametric maps via reduced basis adaptive residual networks

Cited by: 14
Authors
O'Leary-Roseberry, Thomas [1 ]
Du, Xiaosong [2 ]
Chaudhuri, Anirban [1 ]
Martins, Joaquim R. R. A. [3 ]
Willcox, Karen [1 ,4 ]
Ghattas, Omar [1 ,5 ]
Affiliations
[1] Univ Texas Austin, Oden Inst Computat Engn & Sci, 201 24th St, Austin, TX 78712 USA
[2] Missouri Univ Sci & Technol, Dept Mech & Aerosp Engn, 400 13th St, Rolla, MO 65409 USA
[3] Univ Michigan, Dept Aerosp Engn, 1320 Beal Ave, Ann Arbor, MI 48109 USA
[4] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, 2617 Wichita St, C0600, Austin, TX 78712 USA
[5] Univ Texas Austin, Walker Dept Mech Engn, 204 Dean Keeton St, Austin, TX 78712 USA
Keywords
Deep learning; Neural networks; Parametrized PDEs; Control flows; Residual networks; Adaptive surrogate construction; APPROXIMATION; FOUNDATIONS; REDUCTION; FRAMEWORK;
DOI
10.1016/j.cma.2022.115730
Chinese Library Classification: T [Industrial Technology]
Subject classification code: 08
Abstract
We propose a scalable framework for learning high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs. When only limited training data are available, a compact parametrization is beneficial for ameliorating the ill-posedness of the neural network training problem. By linearly restricting high-dimensional maps to informed reduced bases of the inputs, one can compress high-dimensional maps in a constructive way that can be used to detect appropriate basis ranks, equipped with rigorous error estimates. The scalable neural network learning framework then learns the nonlinear mapping between the compressed reduced bases. Unlike the reduced basis construction, however, neural network constructions are not guaranteed to reduce errors by adding representation power, making it difficult to achieve good practical performance. Inspired by recent approximation theory that connects ResNets to sequential minimizing flows, we present an adaptive ResNet construction algorithm. This algorithm enables depth-wise enrichment of the neural network approximation: a shallow network is trained first and then adaptively deepened, which achieves good practical performance. We prove universal approximation of the associated neural network class for L^2_ν functions on compact sets. Our overall framework provides constructive means to detect appropriate breadth and depth, and correspondingly compact parametrizations of neural networks, significantly reducing the need for architectural hyperparameter tuning. Numerical experiments on parametric PDE problems and a 3D CFD wing design optimization parametric map demonstrate that the proposed methodology achieves remarkably high accuracy with limited training data and outperforms the other neural network strategies we compared against. (c) 2022 Elsevier B.V. All rights reserved.
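The abstract describes two constructive ingredients: detecting a reduced-basis rank from data with a computable error estimate, and enriching a ResNet depth-wise without degrading the current approximation. A minimal NumPy sketch of both ideas follows; it is an illustration under stated assumptions, not the paper's implementation, and all names (`pod_rank`, `ResidualBlock`, the tolerance values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pod_rank(snapshots, tol=1e-8):
    """Smallest rank r whose relative SVD truncation error is below tol.

    The discarded singular-value energy sqrt(sum_{j>=r} s_j^2) / ||s||
    plays the role of the abstract's rigorous basis-rank error estimate.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # tail[i] = relative error committed by keeping only the first i modes.
    tail = np.sqrt(np.cumsum(s[::-1] ** 2)[::-1]) / np.linalg.norm(s)
    r = int(np.searchsorted(-tail, -tol))  # first index with tail <= tol
    return r, U[:, :r]

# Snapshot matrix from an intrinsically rank-5 linear map (illustrative data).
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
r, basis = pod_rank(A)

class ResidualBlock:
    """x -> x + W2 @ tanh(W1 @ x); W2 = 0 makes the block an exact identity."""
    def __init__(self, dim, width, zero_init=True):
        self.W1 = rng.standard_normal((width, dim)) / np.sqrt(dim)
        self.W2 = (np.zeros((dim, width)) if zero_init
                   else rng.standard_normal((dim, width)) / np.sqrt(width))
    def __call__(self, x):
        return x + self.W2 @ np.tanh(self.W1 @ x)

def resnet(blocks, x):
    for block in blocks:
        x = block(x)
    return x

# Depth-wise enrichment: appending a zero-initialized block leaves the
# current approximation unchanged, so added depth cannot increase the
# training loss before the new block is itself trained.
x = rng.standard_normal(r)
blocks = [ResidualBlock(r, 16, zero_init=False)]
y_before = resnet(blocks, x)
blocks.append(ResidualBlock(r, 16))  # enrich depth
y_after = resnet(blocks, x)
assert np.allclose(y_before, y_after)
```

The identity initialization is what makes the adaptation safe in practice: each deepening step starts from the previously trained map, in the spirit of the sequential-flow view of ResNets the abstract invokes.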
Pages: 29