Vanishing Curvature in Randomly Initialized Deep ReLU Networks

Cited by: 0
Authors
Orvieto, Antonio [1]
Kohler, Jonas [1]
Pavllo, Dario [1]
Hofmann, Thomas [1]
Lucchi, Aurelien [2]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
[2] University of Basel, Basel, Switzerland
Keywords
GAMMA
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep ReLU networks form the basis of many modern neural architectures. Yet, the loss landscape of such networks and its interaction with state-of-the-art optimizers is not fully understood. One of the most crucial aspects is the landscape at random initialization, which often dramatically influences convergence speed. In their seminal works, Glorot & Bengio (2010) and He et al. (2015) propose initialization strategies that are supposed to prevent gradients from vanishing. Yet, we identify shortcomings of their expectation-based analysis as network depth increases, and show that the proposed initializations can actually fail to deliver stable gradient norms. More precisely, by leveraging an in-depth analysis of the median of the forward pass, we first show that, with high probability, vanishing gradients cannot be circumvented when the network width scales more slowly than Omega(depth). Second, we extend this analysis to second-order derivatives and show that random i.i.d. initialization also gives rise to Hessian matrices whose eigenspectra vanish with depth. Whenever this happens, optimizers are initialized in a very flat, saddle-point-like plateau, which is particularly hard to escape with stochastic gradient descent (SGD), whose escape time is inversely related to curvature magnitude. We believe that this observation is crucial for fully understanding (a) the historical difficulty of training deep networks with vanilla SGD and (b) the success of adaptive gradient methods, which naturally adapt to curvature and thus quickly escape flat plateaus.
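The abstract's first-order claim is easy to probe empirically. Below is a minimal sketch (not the authors' code; the width of 32, the sum readout, and the 50-trial median are illustrative assumptions) that measures the median squared gradient norm of a He-initialized deep ReLU chain as depth grows while the width stays fixed, the regime in which the paper predicts vanishing gradients:

```python
# Minimal sketch: does the gradient at He initialization vanish when
# depth grows but width stays fixed (i.e., width grows slower than
# Omega(depth))? We track the MEDIAN over random draws, mirroring the
# paper's median-based analysis of the forward pass.
import numpy as np

rng = np.random.default_rng(0)

def median_grad_norm(depth, width, trials=50):
    """Median squared gradient norm of sum(h_L) w.r.t. the input x."""
    norms = []
    for _ in range(trials):
        x = rng.standard_normal(width)
        Ws, masks = [], []
        h = x
        for _ in range(depth):
            # He initialization: W_ij ~ N(0, 2 / fan_in)
            W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
            pre = W @ h
            mask = (pre > 0).astype(float)  # ReLU derivative
            h = pre * mask
            Ws.append(W)
            masks.append(mask)
        # Backward pass: gradient of sum(h_L) w.r.t. x via the chain rule.
        g = np.ones(width)
        for W, mask in zip(reversed(Ws), reversed(masks)):
            g = W.T @ (g * mask)
        norms.append(np.linalg.norm(g) ** 2)
    return np.median(norms)

for depth in (10, 50, 100, 200):
    print(f"depth={depth:4d}  median ||grad||^2 = "
          f"{median_grad_norm(depth, width=32):.3e}")
```

At fixed width, the printed medians should shrink rapidly with depth, consistent with the abstract's Omega(depth) width requirement, even though the He scheme preserves the gradient norm in expectation. An analogous experiment using Hessian-vector products with power iteration would probe the second-order (vanishing-curvature) claim.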
Pages: 34