Approximation bounds for convolutional neural networks in operator learning

Cited by: 13
Authors
Franco, Nicola Rares [1]
Fresca, Stefania [1]
Manzoni, Andrea [1]
Zunino, Paolo [1]
Affiliations
[1] Politecnico di Milano, MOX, Department of Mathematics, Piazza Leonardo da Vinci 32, I-20133 Milan, Italy
Keywords
Operator learning; Convolutional neural networks; Approximation theory
DOI
10.1016/j.neunet.2023.01.029
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, deep Convolutional Neural Networks (CNNs) have proven to be successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input $\mu \in \mathbb{R}^p$ onto a functional output $u_\mu : [0,1]^d \to \mathbb{R}$, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
Pages: 129-141
Number of pages: 13
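For readers unfamiliar with this setting, the sketch below shows, in PyTorch, one common way a parameter-to-field map of the kind described in the abstract can be realized as a CNN: a dense layer lifts the input $\mu \in \mathbb{R}^p$ to a coarse latent grid, and transposed convolutions upsample it to a discretization of $u_\mu$ on a uniform grid over $[0,1]^2$. The architecture, layer sizes, and names are illustrative assumptions only, not the networks analyzed in the paper.

```python
# Illustrative sketch (not the architecture analyzed in the paper): a CNN that
# maps a parameter vector mu in R^p to a discretization of u_mu on a uniform
# N x N grid over [0, 1]^2. Channel counts and depths are assumptions chosen
# only to make the example runnable.
import torch
import torch.nn as nn

class ParamToFieldCNN(nn.Module):
    def __init__(self, p: int = 4, n_out: int = 64, channels: int = 16):
        super().__init__()
        self.channels = channels
        self.n0 = n_out // 8  # coarse spatial resolution of the latent grid
        # Dense layer lifting mu to a coarse multi-channel grid
        self.lift = nn.Linear(p, channels * self.n0 * self.n0)
        # Transposed convolutions upsample the coarse grid to the target resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, mu: torch.Tensor) -> torch.Tensor:
        # mu: (batch, p) -> field values on the grid: (batch, n_out, n_out)
        z = self.lift(mu).view(-1, self.channels, self.n0, self.n0)
        return self.decoder(z).squeeze(1)

# Usage: approximate the discretized map mu -> u_mu(x_i) on a 64 x 64 grid
model = ParamToFieldCNN(p=4, n_out=64)
u_hat = model(torch.randn(8, 4))  # shape (8, 64, 64)
```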