Vanishing Curvature in Randomly Initialized Deep ReLU Networks

Cited by: 0
Authors
Orvieto, Antonio [1]
Kohler, Jonas [1]
Pavllo, Dario [1]
Hofmann, Thomas [1]
Lucchi, Aurelien [2]
Affiliations
[1] Swiss Federal Institute of Technology, Zurich, Switzerland
[2] University of Basel, Basel, Switzerland
Keywords
GAMMA;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep ReLU networks form the basis of many modern neural architectures. Yet, the loss landscape of such networks and its interaction with state-of-the-art optimizers is not fully understood. One of the most crucial aspects is the landscape at random initialization, which often has a dramatic influence on convergence speed. In their seminal works, Glorot & Bengio (2010) and He et al. (2015) propose initialization strategies that are supposed to prevent gradients from vanishing. Yet, we identify shortcomings of their expectation-based analysis as network depth increases, and show that the proposed initializations can actually fail to deliver stable gradient norms. More precisely, by leveraging an in-depth analysis of the median of the forward pass, we first show that, with high probability, vanishing gradients cannot be circumvented when the network width scales with less than Ω(depth). Second, we extend this analysis to second-order derivatives and show that random i.i.d. initialization also gives rise to Hessian matrices whose eigenspectra vanish with depth. Whenever this happens, optimizers are initialized in a very flat, saddle-point-like plateau, which is particularly hard to escape with stochastic gradient descent (SGD), since its escape time is inversely related to curvature magnitude. We believe that this observation is crucial for fully understanding (a) the historical difficulties of training deep nets with vanilla SGD and (b) the success of adaptive gradient methods, which naturally adapt to curvature and thus quickly escape flat plateaus.
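The following is not from the paper; it is a minimal NumPy sketch of the phenomenon the abstract describes: under He initialization the expected squared activation norm is preserved across ReLU layers, yet a single random draw of a network whose depth far exceeds its width typically shows collapsing (or, occasionally, exploding) activation and gradient norms. The width, depth, and seed below are arbitrary illustrative choices, and the script probes one random draw rather than reproducing the paper's median analysis.

```python
# Illustrative sketch only (not the paper's code): forward/backward norms of a
# randomly initialized deep ReLU network under He initialization.
import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed
width, depth = 32, 500                # depth deliberately much larger than width

x = rng.standard_normal(width)
inputs, weights = [x], []
for _ in range(depth):
    # He initialization: W_ij ~ N(0, 2 / fan_in), chosen so that
    # E[||ReLU(W x)||^2] = ||x||^2 for a fixed input x.
    W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
    weights.append(W)
    x = np.maximum(W @ x, 0.0)        # ReLU forward pass
    inputs.append(x)

print(f"input norm:                  {np.linalg.norm(inputs[0]):.3e}")
print(f"output norm after {depth} layers: {np.linalg.norm(x):.3e}")

# Backpropagate an arbitrary upstream gradient to the input; the same gap
# between expectation and typical value shows up in the gradient norm.
g = rng.standard_normal(width)
for W, a_in in zip(reversed(weights), reversed(inputs[:-1])):
    pre = W @ a_in                    # recompute pre-activations for the ReLU mask
    g = W.T @ (g * (pre > 0))
print(f"gradient norm w.r.t. input:  {np.linalg.norm(g):.3e}")
```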
Pages: 34
Related Papers
50 records in total
  • [1] Quantitative Gaussian approximation of randomly initialized deep neural networks
    Basteri, Andrea
    Trevisan, Dario
    MACHINE LEARNING, 2024, 113 (09) : 6373 - 6393
  • [2] On Scrambling Phenomena for Randomly Initialized Recurrent Networks
    Chatziafratis, Vaggos
    Panageas, Ioannis
    Sanford, Clayton
    Stavroulakis, Stelios Andrew
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Pruning Randomly Initialized Neural Networks with Iterative Randomization
    Chijiwa, Daiki
    Yamaguchi, Shin'ya
    Ida, Yasutoshi
    Umakoshi, Kenji
    Inoue, Tomohiro
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Convergence of deep ReLU networks
    Xu, Yuesheng
    Zhang, Haizhang
    NEUROCOMPUTING, 2024, 571
  • [5] Demystifying Randomly Initialized Networks for Evaluating Generative Models
    Lee, Junghyuk
    Kim, Jun-Hyuk
    Lee, Jong-Seok
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023 : 8482 - 8490
  • [6] Robust Binary Models by Pruning Randomly-initialized Networks
    Liu, Chen
    Zhao, Ziqi
    Susstrunk, Sabine
    Salzmann, Mathieu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [7] Nonlinear Approximation and (Deep) ReLU Networks
    Daubechies, I.
    DeVore, R.
    Foucart, S.
    Hanin, B.
    Petrova, G.
    CONSTRUCTIVE APPROXIMATION, 2022, 55 (01) : 127 - 172
  • [8] Depth Degeneracy in Neural Networks: Vanishing Angles in Fully Connected ReLU Networks on Initialization
    Jakub, Cameron
    Nica, Mihai
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 45