Deep Limits and a Cut-Off Phenomenon for Neural Networks

Cited by: 0
Authors
Avelin, Benny [1 ]
Karlsson, Anders [1 ,2 ]
Affiliations
[1] Uppsala Univ, Dept Math, Box 256, S-75105 Uppsala, Sweden
[2] Univ Geneva, Sect Math, Case Postale 64, CH-1211 Geneva 4, Switzerland
Funding
Swedish Research Council; Swiss National Science Foundation;
Keywords
deep limits; neural network; deep learning; ergodic theory; metric geometry;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
We consider dynamical and geometrical aspects of deep learning. For many standard choices of layer maps we display semi-invariant metrics which quantify differences between data or decision functions. This allows us, when considering random layer maps and using non-commutative ergodic theorems, to deduce that certain limits exist when letting the number of layers tend to infinity. We also examine the random initialization of standard networks where we observe a surprising cut-off phenomenon in terms of the number of layers, the depth of the network. This could be a relevant parameter when choosing an appropriate number of layers for a given learning task, or for selecting a good initialization procedure. More generally, we hope that the notions and results in this paper can provide a framework, in particular a geometric one, for a part of the theoretical understanding of deep neural networks.
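The following is a minimal, hypothetical sketch (not the authors' construction) of the kind of experiment the abstract alludes to: propagating pairs of inputs through many randomly initialized layer maps and tracking how a distance between their representations evolves with depth. The width, weight variance, activation, and Euclidean metric below are illustrative assumptions only.

```python
import numpy as np

# Illustrative simulation: two inputs pushed through a deep stack of i.i.d.
# random fully connected layers. The recorded distances show how the two
# trajectories merge or separate as the number of layers grows, which is the
# sort of depth-dependent behavior the paper's cut-off phenomenon concerns.

rng = np.random.default_rng(0)
width, depth = 100, 200

def random_layer(width, rng):
    """One fully connected layer with Gaussian weights and ReLU activation."""
    W = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
    b = rng.normal(0.0, 0.1, size=width)
    return lambda h: np.maximum(W @ h + b, 0.0)

x, y = rng.normal(size=width), rng.normal(size=width)
hx, hy = x, y
distances = []
for _ in range(depth):
    layer = random_layer(width, rng)  # fresh random layer map at each depth
    hx, hy = layer(hx), layer(hy)
    distances.append(np.linalg.norm(hx - hy))

# Plotting `distances` against depth would reveal at which depth the behavior
# of the randomly initialized network changes qualitatively.
print(distances[:5], distances[-5:])
```

Plotting the recorded distances against depth is one way to look for an abrupt transition at a critical number of layers, under the illustrative choices above.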
Pages: 29