Vanishing Curvature in Randomly Initialized Deep ReLU Networks

Cited by: 0
Authors:
Orvieto, Antonio [1 ]
Kohler, Jonas [1 ]
Pavllo, Dario [1 ]
Hofmann, Thomas [1 ]
Lucchi, Aurelien [2 ]
Institutions:
[1] Swiss Fed Inst Technol, Zurich, Switzerland
[2] Univ Basel, Basel, Switzerland
Keywords:
GAMMA;
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Deep ReLU networks underlie many modern neural architectures. Yet the loss landscape of such networks, and its interaction with state-of-the-art optimizers, is not fully understood. One of the most crucial aspects is the landscape at random initialization, which often influences convergence speed dramatically. In their seminal works, Glorot & Bengio (2010) and He et al. (2015) propose initialization strategies intended to prevent gradients from vanishing. Yet we identify shortcomings of their expectation-based analysis as network depth increases, and show that the proposed initializations can actually fail to deliver stable gradient norms. More precisely, by leveraging an in-depth analysis of the median of the forward pass, we first show that, with high probability, vanishing gradients cannot be circumvented when the network width scales with less than Ω(depth). Second, we extend this analysis to second-order derivatives and show that random i.i.d. initialization also gives rise to Hessian matrices whose eigenspectra vanish in depth. Whenever this happens, optimizers are initialized in a very flat, saddle-point-like plateau, which is particularly hard to escape with stochastic gradient descent (SGD), since its escape time is inversely related to the curvature magnitude. We believe this observation is crucial for fully understanding (a) the historical difficulties of training deep nets with vanilla SGD and (b) the success of adaptive gradient methods, which naturally adapt to curvature and thus quickly escape flat plateaus.
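To illustrate the phenomenon the abstract describes, the following minimal NumPy sketch (not the authors' code; the width, depth, and trial counts are illustrative assumptions) tracks the median log-norm of the forward pass through He-initialized deep ReLU networks. With a fixed width that is small relative to the depth, the median shrinks with depth even though the layer-wise expectation of the squared norm is preserved, mirroring the expectation-vs-median gap discussed above.

```python
# Minimal sketch, assuming fully connected ReLU layers of constant width with
# He initialization (W_ij ~ N(0, 2/fan_in)). Width, depths, and trial counts
# below are arbitrary illustrative choices, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


def forward_log_norm_ratio(width, depth, rng):
    """log(||h_L|| / ||h_0||) for one randomly He-initialized ReLU network."""
    h = rng.standard_normal(width)
    log_ratio = 0.0
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        h_new = relu(W @ h)
        log_ratio += np.log(np.linalg.norm(h_new) / np.linalg.norm(h))
        h = h_new
    return log_ratio


for depth in (10, 50, 200):
    samples = [forward_log_norm_ratio(width=32, depth=depth, rng=rng)
               for _ in range(200)]
    print(f"depth={depth:4d}  median log ||h_L||/||h_0|| = {np.median(samples):+.2f}")
```

Under these assumptions, the printed medians should become increasingly negative as depth grows while the width stays fixed, consistent with the abstract's claim that stable norms require the width to scale as Ω(depth).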
Pages: 34