The capacity of feedforward neural networks

Cited: 39
Authors
Baldi, Pierre [1 ]
Vershynin, Roman [2 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
[2] Univ Calif Irvine, Dept Math, Irvine, CA 92717 USA
Funding
US National Science Foundation;
Keywords
Neural networks; Capacity; Complexity; Deep learning; BOUNDS;
DOI
10.1016/j.neunet.2019.04.009
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
A long-standing open problem in the theory of neural networks is the development of quantitative methods to estimate and compare the capabilities of different architectures. Here we define the capacity of an architecture as the binary logarithm of the number of functions it can compute as the synaptic weights are varied. The capacity provides an upper bound on the number of bits that can be extracted from the training data and stored in the architecture during learning. We study the capacity of layered, fully connected architectures of linear threshold neurons with $L$ layers of sizes $n_1, n_2, \ldots, n_L$ and show that, in essence, the capacity is given by a cubic polynomial in the layer sizes: $C(n_1, \ldots, n_L) = \sum_{k=1}^{L-1} \min(n_1, \ldots, n_k)\, n_k n_{k+1}$, where layers that are smaller than all previous layers act as bottlenecks. In proving the main result, we also develop new techniques (multiplexing, enrichment, and stacking) as well as new bounds on the capacity of finite sets. We use the main result to identify architectures with maximal or minimal capacity under a number of natural constraints. This leads to the notion of structural regularization for deep architectures. While in general, everything else being equal, shallow networks compute more functions than deep networks, the functions computed by deep networks are more regular and "interesting". (C) 2019 Elsevier Ltd. All rights reserved.
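As a rough illustration (not part of the paper), the following is a minimal Python sketch of the cubic capacity formula above; the function name `capacity` and the example layer sizes are our own choices. It also reproduces, on a toy scale, the abstract's observation that, everything else being equal, shallower architectures attain higher capacity.

```python
def capacity(layers):
    """Capacity C(n_1, ..., n_L) of a fully connected feedforward
    architecture of linear threshold neurons, following the cubic
    formula in the abstract:
        C = sum over k = 1..L-1 of min(n_1, ..., n_k) * n_k * n_{k+1},
    where the running minimum captures bottleneck layers.
    `layers` lists the layer sizes n_1, ..., n_L, input layer first.
    """
    total = 0
    running_min = layers[0]
    for k in range(len(layers) - 1):
        running_min = min(running_min, layers[k])  # bottleneck so far
        total += running_min * layers[k] * layers[k + 1]
    return total

# Hypothetical example: both architectures use 61 neurons in total,
# but the shallow one has higher capacity than the deep, narrow one.
print(capacity([30, 30, 1]))          # 30*30*30 + 30*30*1        = 27900
print(capacity([30, 10, 10, 10, 1]))  # 9000 + 1000 + 1000 + 100  = 11100
```

Since both example architectures contain the same number of neurons, the capacity gap comes purely from how the neurons are arranged into layers, which is the trade-off the abstract describes.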
Pages: 288 - 311
Page count: 24
Related papers
(50 entries in total)
  • [31] Weighted Learning for Feedforward Neural Networks
    Rong-Fang Xu
    Thao-Tsen Chen
    Shie-Jue Lee
    [J]. Journal of Electronic Science and Technology, 2014, 12 (03): - 304
  • [32] FEEDFORWARD NEURAL NETWORKS - A GEOMETRICAL PERSPECTIVE
    BUDINICH, M
    MILOTTI, E
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1991, 24 (04): 881 - 888
  • [33] A constructive algorithm for feedforward neural networks
    Institute of System Science, East China Normal University
    [J]. 1600, 659 - 664 (2004)
  • [34] Training feedforward neural networks using neural networks and genetic algorithms
    Tellez, P
    Tang, Y
    [J]. INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND CONTROL TECHNOLOGIES, VOL 1, PROCEEDINGS, 2004: 308 - 311
  • [35] Quantum neural networks (QNNs): Inherently fuzzy feedforward neural networks
    Purushothaman, G
    Karayiannis, NB
    [J]. ICNN - 1996 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS. 1-4, 1996: 1085 - 1090
  • [36] Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks
    Liu, Ying
    Yang, Peng-Fei
    Zhang, Li-Jun
    Wu, Zhi-Lin
    Feng, Yuan
    [J]. Ruan Jian Xue Bao/Journal of Software, 2023, 34 (07): 1 - 33
  • [37] Protein Prediction with Neural Networks: Feedforward Networks and Recurrent Networks
    Cardenas Quintero, Beitmantt Geovanni
    [J]. REVISTA FACULTAD DE INGENIERIA, UNIVERSIDAD PEDAGOGICA Y TECNOLOGICA DE COLOMBIA, 2007, 16 (23): 75 - 87
  • [38] Riemannian metrics for neural networks I: feedforward networks
    Ollivier, Yann
    [J]. INFORMATION AND INFERENCE-A JOURNAL OF THE IMA, 2015, 4 (02): 108 - 153
  • [39] Partially connected feedforward neural networks on Apollonian networks
    Wong, W. K.
    Guo, Z. X.
    Leung, S. Y. S.
    [J]. PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2010, 389 (22): 5298 - 5307
  • [40] Quantum neural networks versus conventional feedforward neural networks:: An experimental study
    Kretzschmar, R
    Büeler, R
    Karayiannis, NB
    Eggimann, F
    [J]. NEURAL NETWORKS FOR SIGNAL PROCESSING X, VOLS 1 AND 2, PROCEEDINGS, 2000: 328 - 337