Local randomized neural networks with discontinuous Galerkin methods for partial differential equations

Cited by: 3
Authors
Sun, Jingbo [1 ]
Dong, Suchuan [2 ]
Wang, Fei [1 ]
Affiliations
[1] Xi'an Jiaotong Univ, Sch Math & Stat, Xi'an 710049, Shaanxi, Peoples R China
[2] Purdue Univ, Ctr Computat & Appl Math, Dept Math, W Lafayette, IN 47907 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Randomized neural networks; Discontinuous Galerkin method; Partial differential equations; Least-squares method; Space-time approach; EXTREME LEARNING-MACHINE; DEEP RITZ METHOD; APPROXIMATION; ALGORITHM;
DOI
10.1016/j.cam.2024.115830
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Randomized Neural Networks (RNNs) are a class of neural networks in which the hidden-layer parameters are fixed to randomly assigned values and the output-layer parameters are obtained by solving a linear system through least squares. This improves efficiency without degrading the accuracy of the neural network. In this paper, we combine the idea of Local RNNs (LRNNs) with the Discontinuous Galerkin (DG) approach for solving partial differential equations: RNNs approximate the solution on the subdomains, and the DG formulation glues them together. Taking the Poisson problem as a model, we propose three numerical schemes and provide convergence analysis. We then extend these ideas to time-dependent problems: taking the heat equation as a model, three space-time LRNN schemes with DG formulations are proposed. Finally, we present numerical tests to demonstrate the performance of the methods developed herein, comparing them with the finite element method and the conventional DG method. The LRNN-DG methods achieve higher accuracy with the same number of degrees of freedom and solve time-dependent problems more precisely and efficiently, indicating that this new approach has great potential for solving partial differential equations.
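To make the core mechanism described in the abstract concrete, the following is a minimal sketch of the basic RNN idea on a single (global) domain: hidden-layer weights and biases are drawn randomly and kept fixed, and only the output-layer coefficients are determined by a linear least-squares solve, here via collocation for a 1D Poisson problem. This is not the authors' LRNN-DG scheme; the tanh activation, the sampling intervals, the point counts, and the manufactured solution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100   # number of hidden neurons (random features), chosen arbitrarily
N = 200   # number of interior collocation points, chosen arbitrarily

# Fixed random hidden-layer parameters: never trained
w = rng.uniform(-5.0, 5.0, size=M)
b = rng.uniform(-5.0, 5.0, size=M)

def phi(x):
    # Hidden-layer features tanh(w*x + b); x has shape (n,), output (n, M)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    # Second derivative in x of tanh(w*x + b): -2 w^2 t (1 - t^2), t = tanh(w*x + b)
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * (w**2) * t * (1.0 - t**2)

# Manufactured solution u(x) = sin(pi x) on (0, 1), so -u'' = f with f below
f = lambda x: np.pi**2 * np.sin(np.pi * x)

x_in = np.linspace(0.0, 1.0, N + 2)[1:-1]   # interior collocation points
x_bc = np.array([0.0, 1.0])                 # homogeneous Dirichlet boundary points

# Overdetermined linear system for the output-layer coefficients c:
#   -phi_xx(x_in) c = f(x_in)   (PDE residual at interior points)
#    phi(x_bc)    c = 0         (boundary conditions)
A = np.vstack([-phi_xx(x_in), phi(x_bc)])
rhs = np.concatenate([f(x_in), np.zeros(2)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the RNN approximation and report the maximum error
x_test = np.linspace(0.0, 1.0, 1000)
u_rnn = phi(x_test) @ c
u_exact = np.sin(np.pi * x_test)
print("max error:", np.abs(u_rnn - u_exact).max())
```

In the LRNN-DG methods of the paper, a least-squares system of this kind is assembled over several subdomains, each carrying its own local RNN, with DG-style coupling terms on the subdomain interfaces; the sketch above illustrates only the single-domain random-feature least-squares step.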
Pages: 25