The capacity of feedforward neural networks

Cited by: 39
Authors
Baldi, Pierre [1 ]
Vershynin, Roman [2 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
[2] Univ Calif Irvine, Dept Math, Irvine, CA 92717 USA
Funding
National Science Foundation (USA);
Keywords
Neural networks; Capacity; Complexity; Deep learning; Bounds
DOI
10.1016/j.neunet.2019.04.009
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
A long-standing open problem in the theory of neural networks is the development of quantitative methods to estimate and compare the capabilities of different architectures. Here we define the capacity of an architecture by the binary logarithm of the number of functions it can compute as the synaptic weights are varied. The capacity provides an upper bound on the number of bits that can be extracted from the training data and stored in the architecture during learning. We study the capacity of layered, fully connected architectures of linear threshold neurons with L layers of sizes $n_1, n_2, \ldots, n_L$ and show that in essence the capacity is given by a cubic polynomial in the layer sizes: $C(n_1, \ldots, n_L) = \sum_{k=1}^{L-1} \min(n_1, \ldots, n_k)\, n_k n_{k+1}$, where layers that are smaller than all previous layers act as bottlenecks. In proving the main result, we also develop new techniques (multiplexing, enrichment, and stacking) as well as new bounds on the capacity of finite sets. We use the main result to identify architectures with maximal or minimal capacity under a number of natural constraints. This leads to the notion of structural regularization for deep architectures. While in general, everything else being equal, shallow networks compute more functions than deep networks, the functions computed by deep networks are more regular and "interesting". © 2019 Elsevier Ltd. All rights reserved.
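As a quick illustration of the capacity formula quoted in the abstract, the sketch below simply evaluates the cubic polynomial $C(n_1, \ldots, n_L) = \sum_{k=1}^{L-1} \min(n_1, \ldots, n_k)\, n_k n_{k+1}$ for a list of layer sizes. The function name and the example architectures are illustrative choices, not from the paper, and the polynomial is only the leading-order estimate the abstract describes ("in essence"), subject to the conditions and constants proved in the full text.

def capacity(layer_sizes):
    # Evaluate C(n_1, ..., n_L) = sum_{k=1}^{L-1} min(n_1, ..., n_k) * n_k * n_{k+1}
    # for a fully connected architecture of linear threshold neurons.
    total = 0
    running_min = float("inf")  # min(n_1, ..., n_k), the bottleneck term
    for k in range(len(layer_sizes) - 1):
        running_min = min(running_min, layer_sizes[k])
        total += running_min * layer_sizes[k] * layer_sizes[k + 1]
    return total

# Example (hypothetical sizes): a shallower vs. a deeper architecture.
print(capacity([100, 100, 1]))     # 100-100-1  -> 1,010,000
print(capacity([100, 50, 50, 1]))  # 100-50-50-1 -> 627,500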
Pages: 288 - 311
Number of pages: 24
Related papers
50 records in total
  • [1] Capacity of two-layer feedforward neural networks with binary weights
    Ji, CY
    Psaltis, D
    [J]. IEEE TRANSACTIONS ON INFORMATION THEORY, 1998, 44 (01) : 256 - 268
  • [2] Approximating polynomial functions by feedforward artificial neural networks: Capacity analysis and design
    Malakooti, B
    Zhou, YQ
    [J]. APPLIED MATHEMATICS AND COMPUTATION, 1998, 90 (01) : 27 - 51
  • [3] ON TRAINING FEEDFORWARD NEURAL NETWORKS
    KAK, S
    [J]. PRAMANA-JOURNAL OF PHYSICS, 1993, 40 (01) : 35 - 42
  • [4] Optimization of feedforward neural networks
    Han, J
    Moraga, C
    Sinne, S
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 1996, 9 (02) : 109 - 119
  • [5] PROPERTIES OF FEEDFORWARD NEURAL NETWORKS
    BUDINICH, M
    MILOTTI, E
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1992, 25 (07) : 1903 - 1914
  • [6] Oscillation Characteristics of Feedforward Neural Networks
    Li, Yudi
    Wu, Aiguo
    Dong, Na
    Du, Lijia
    Chai, Yi
    [J]. 2018 13TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2018, : 1074 - 1079
  • [7] Randomized Algorithms for Feedforward Neural Networks
    Li Fan-jun
    Li Ying
    [J]. PROCEEDINGS OF THE 35TH CHINESE CONTROL CONFERENCE 2016, 2016, : 3664 - 3668
  • [8] Channel equalization by feedforward neural networks
    Lu, B
    Evans, BL
    [J]. ISCAS '99: PROCEEDINGS OF THE 1999 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOL 5: SYSTEMS, POWER ELECTRONICS, AND NEURAL NETWORKS, 1999, : 587 - 590
  • [9] Feedforward neural networks for compound signals
    Szczuka, Marcin
    Slezak, Dominik
    [J]. THEORETICAL COMPUTER SCIENCE, 2011, 412 (42) : 5960 - 5973
  • [10] Interpolation functions of feedforward neural networks
    Li, HX
    Lee, ES
    [J]. COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2003, 46 (12) : 1861 - 1874