The universal approximation theorem for complex-valued neural networks

Cited by: 17
Author
Voigtlaender, Felix [1,2,3]
Affiliations
[1] Tech Univ Munich, Dept Math, D-85748 Garching, Germany
[2] Univ Vienna, Fac Math, Oskar Morgenstern Pl 1, A-1090 Vienna, Austria
[3] Catholic Univ Eichstatt Ingolstadt KU, Math Inst Machine Learning & Data Sci MIDS, Schanz 49, D-85049 Ingolstadt, Germany
Keywords
Complex-valued neural networks; Universal approximation theorem; Deep neural networks; Polyharmonic functions; Holomorphic functions; MULTILAYER FEEDFORWARD NETWORKS; SMOOTH
DOI
10.1016/j.acha.2022.12.002
Chinese Library Classification
O29 [Applied Mathematics]
Discipline code
070104
Abstract
We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks. Precisely, we consider feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$ in which each neuron performs the operation $\mathbb{C}^N \to \mathbb{C}$, $z \mapsto \sigma(b + w^T z)$, with weights $w \in \mathbb{C}^N$ and a bias $b \in \mathbb{C}$. We completely characterize those activation functions $\sigma$ for which the associated complex networks have the universal approximation property, meaning that they can uniformly approximate any continuous function on any compact subset of $\mathbb{C}^d$ arbitrarily well. Unlike the classical case of real networks, the set of "good activation functions" (those which give rise to networks with the universal approximation property) differs significantly depending on whether one considers deep networks or shallow networks: for deep networks with at least two hidden layers, the universal approximation property holds as long as $\sigma$ is neither a polynomial, a holomorphic function, nor an antiholomorphic function. Shallow networks, on the other hand, are universal if and only if the real part or the imaginary part of $\sigma$ is not a polyharmonic function. (c) 2022 Elsevier Inc. All rights reserved.
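As a concrete illustration of the neuron operation $z \mapsto \sigma(b + w^T z)$ described in the abstract, here is a minimal NumPy sketch of a shallow complex-valued network. The split-ReLU activation used below (ReLU applied separately to the real and imaginary parts) is an illustrative assumption, not an activation prescribed by the paper; its real part, max(Re z, 0), is not smooth and hence not polyharmonic, so by the paper's criterion a shallow network with this activation has the universal approximation property.

import numpy as np

def sigma(z):
    # Illustrative split-ReLU activation (an assumption for this sketch):
    # ReLU applied separately to real and imaginary parts.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def shallow_cvnn(z, W, b, c):
    # Shallow network: hidden neuron j computes z |-> sigma(b_j + w_j^T z),
    # with w_j in C^N and b_j in C (W @ z is the unconjugated product w^T z,
    # as in the abstract); the output is a complex-linear readout.
    return c @ sigma(b + W @ z)

rng = np.random.default_rng(0)
N, M = 3, 8  # input dimension and number of hidden neurons (arbitrary)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # input in C^N
W = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)
c = rng.standard_normal(M) + 1j * rng.standard_normal(M)
print(shallow_cvnn(z, W, b, c))  # a single complex output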
Pages: 33-61
Page count: 29
Related papers
(50 records in total)
  • [31] Orthogonality of decision boundaries in complex-valued neural networks
    Nitta, T.
    NEURAL COMPUTATION, 2004, 16(1): 73-97
  • [32] Relaxation of the stability condition of the complex-valued neural networks
    Lee, D. L.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2001, 12(5): 1260-1262
  • [33] Fractal variation of attractors in complex-valued neural networks
    Hirose, A.
    NEURAL PROCESSING LETTERS, 1994, 1(1): 6-8
  • [34] An introduction to complex-valued recurrent correlation neural networks
    Valle, Marcos Eduardo
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014: 3387-3394
  • [35] Multistability of complex-valued neural networks with distributed delays
    Gong, Weiqiang; Liang, Jinling; Zhang, Congjun
    NEURAL COMPUTING & APPLICATIONS, 2017, 28: S1-S14
  • [36] An augmented CRTRL for complex-valued recurrent neural networks
    Goh, Su Lee; Mandic, Danilo P.
    NEURAL NETWORKS, 2007, 20(10): 1061-1066
  • [37] Multistability and multiperiodicity analysis of complex-valued neural networks
    Hu, Jin; Wang, Jun
    ADVANCES IN NEURAL NETWORKS - ISNN 2014, 2014, 8866: 59-68
  • [38] Representation of complex-valued neural networks: A real-valued approach
    Yadav, A.; Mishra, D.; Ray, S.; Yadav, R. N.; Kalra, P. K.
    2005 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSING AND INFORMATION PROCESSING, PROCEEDINGS, 2005: 331-335
  • [39] A structural optimization algorithm for complex-valued neural networks
    Dong, Zhongying; Huang, He
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019: 1530-1535
  • [40] A complex-valued RTRL algorithm for recurrent neural networks
    Goh, S. L.; Mandic, D. P.
    NEURAL COMPUTATION, 2004, 16(12): 2699-2713