Greedy training algorithms for neural networks and applications to PDEs

Cited by: 13
Authors
Siegel, Jonathan W. [1 ]
Hong, Qingguo [1 ]
Jin, Xianlin [2 ]
Hao, Wenrui [1 ]
Xu, Jinchao [1 ]
Affiliations
[1] Penn State Univ, Dept Math, University Pk, PA 16802 USA
[2] Peking Univ, Sch Math Sci, Beijing, Peoples R China
Keywords
Neural networks; Partial differential equations; Greedy algorithms; Generalization accuracy; UNIVERSAL APPROXIMATION; CONVERGENCE-RATES; ERROR-BOUNDS;
DOI
10.1016/j.jcp.2023.112084
CLC number
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Recently, neural networks have been widely applied to solving partial differential equations (PDEs). Although such methods have proven remarkably successful on practical engineering problems, they have not been shown, theoretically or empirically, to converge to the underlying PDE solution with arbitrarily high accuracy. The primary difficulty lies in solving the highly non-convex optimization problems resulting from the neural network discretization, which are difficult to treat both theoretically and practically. It is our goal in this work to take a step toward remedying this. For this purpose, we develop a novel greedy training algorithm for shallow neural networks. Our method is applicable both to the variational formulation of the PDE and to the residual minimization formulation pioneered by physics-informed neural networks (PINNs). We analyze the method and obtain a priori error bounds when solving PDEs from the function class defined by shallow networks, which rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high-dimensional PDEs, to confirm the theoretical convergence rate. Although the method is expensive relative to traditional approaches such as finite element methods, we view this work as a proof of concept for neural network-based methods, showing that numerical methods based upon neural networks can rigorously converge. (c) 2023 Elsevier Inc. All rights reserved.
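The abstract describes greedily training a shallow network: neurons are added one at a time, each chosen to correlate well with the current residual, with outer coefficients refit at every step. The sketch below illustrates this orthogonal-greedy idea on a simple 1-D function-approximation problem; the candidate search by random sampling of inner weights, the ReLU dictionary, and all parameter ranges are illustrative assumptions, not the paper's actual subproblem solver or PDE setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def orthogonal_greedy(x, y, n_neurons=20, n_candidates=2000):
    """Greedily add ReLU neurons relu(w*x + b); refit all outer
    coefficients by least squares after each addition."""
    ws, bs = [], []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(n_neurons):
        # Sample candidate inner weights (w, b) and pick the dictionary
        # element most correlated with the current residual.
        cw = rng.uniform(-3, 3, n_candidates)
        cb = rng.uniform(-3, 3, n_candidates)
        feats = relu(np.outer(x, cw) + cb)        # (n_samples, n_candidates)
        norms = np.linalg.norm(feats, axis=0) + 1e-12
        scores = np.abs(residual @ feats) / norms
        k = int(np.argmax(scores))
        ws.append(cw[k])
        bs.append(cb[k])
        # Orthogonal step: least-squares refit of all outer coefficients.
        A = relu(np.outer(x, np.array(ws)) + np.array(bs))
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return np.array(ws), np.array(bs), coef

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(np.pi * x)
ws, bs, coef = orthogonal_greedy(x, y)
approx = relu(np.outer(x, ws) + bs) @ coef
err = np.max(np.abs(approx - y))
print(f"max error with {len(ws)} neurons: {err:.4f}")
```

Refitting all outer coefficients at each step (rather than only the new one) is what distinguishes the orthogonal variant from pure greedy selection and is what makes the error decrease monotonically in the number of neurons.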
Pages: 27