Vanishing Curvature in Randomly Initialized Deep ReLU Networks

Cited by: 0
Authors
Orvieto, Antonio [1 ]
Kohler, Jonas [1 ]
Pavllo, Dario [1 ]
Hofmann, Thomas [1 ]
Lucchi, Aurelien [2 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
[2] University of Basel, Basel, Switzerland
Keywords
GAMMA;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep ReLU networks underlie many modern neural architectures. Yet, the loss landscape of such networks, and its interaction with state-of-the-art optimizers, is not fully understood. One of the most crucial aspects is the landscape at random initialization, which often has a dramatic influence on convergence speed. In their seminal works, Glorot & Bengio (2010) and He et al. (2015) propose initialization strategies that are supposed to prevent gradients from vanishing. However, we identify shortcomings of their expectation-based analysis as network depth increases, and show that the proposed initializations can in fact fail to deliver stable gradient norms. More precisely, by leveraging an in-depth analysis of the median of the forward pass, we first show that, with high probability, vanishing gradients cannot be circumvented when the network width scales with less than Ω(depth). Second, we extend this analysis to second-order derivatives and show that random i.i.d. initialization also gives rise to Hessian matrices whose eigenspectra vanish in depth. Whenever this happens, optimizers are initialized in a very flat, saddle-point-like plateau, which is particularly hard to escape with stochastic gradient descent (SGD), since its escape time is inversely related to curvature magnitudes. We believe that this observation is crucial for fully understanding (a) the historical difficulties of training deep nets with vanilla SGD and (b) the success of adaptive gradient methods, which naturally adapt to curvature and can therefore escape flat plateaus quickly.
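As a rough illustration of the width-versus-depth claim above (a sketch, not code from the paper), the following PyTorch snippet builds plain He-initialized ReLU MLPs and compares the median first-layer gradient norm at initialization for a fixed width against a width that grows with depth. The layer sizes, the toy squared-output loss, and the number of random trials are illustrative assumptions.

import torch
import torch.nn as nn

def he_relu_mlp(width: int, depth: int) -> nn.Sequential:
    # Plain fully-connected ReLU network with He-normal weights and no biases.
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width, bias=False)
        nn.init.kaiming_normal_(lin.weight, nonlinearity="relu")
        layers += [lin, nn.ReLU()]
    head = nn.Linear(width, 1, bias=False)
    nn.init.kaiming_normal_(head.weight, nonlinearity="relu")
    layers.append(head)
    return nn.Sequential(*layers)

def first_layer_grad_norm(width: int, depth: int, seed: int) -> float:
    # Gradient norm of a toy squared-output loss w.r.t. the first layer, at initialization.
    torch.manual_seed(seed)
    net = he_relu_mlp(width, depth)
    x = torch.randn(1, width)
    net(x).pow(2).mean().backward()
    return net[0].weight.grad.norm().item()

def median_grad_norm(width: int, depth: int, trials: int = 9) -> float:
    # Median over random draws, mirroring the abstract's focus on typical (median) behavior.
    norms = torch.tensor([first_layer_grad_norm(width, depth, s) for s in range(trials)])
    return norms.median().item()

if __name__ == "__main__":
    for depth in (5, 20, 50):
        narrow = median_grad_norm(width=32, depth=depth)       # fixed width
        wide = median_grad_norm(width=8 * depth, depth=depth)  # width scaled with depth
        print(f"depth={depth:2d}  width=32: {narrow:.2e}  width=8*depth: {wide:.2e}")

Under these assumptions, the fixed-width runs typically show the shrinking gradient norms the paper analyzes, while widening with depth keeps them on a more stable scale; the exact numbers depend on the seeds and sizes chosen here.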
Pages: 34
Related Papers
50 records in total (items [31]-[40] shown below)
  • [31] Hanin, Boris; Rolnick, David. Deep ReLU Networks Have Surprisingly Few Activation Patterns. Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019.
  • [32] Fei, Wen; Dai, Wenrui; Li, Chenglin; Zou, Junni; Xiong, Hongkai. On Centralization and Unitization of Batch Normalization for Deep ReLU Neural Networks. IEEE Transactions on Signal Processing, 2024, 72: 2827-2841.
  • [33] Dung, Dinh; Nguyen, Van Kien. Deep ReLU neural networks in high-dimensional approximation. Neural Networks, 2021, 142: 619-635.
  • [34] He, Juncai; Li, Lin; Xu, Jinchao. ReLU deep neural networks from the hierarchical basis perspective. Computers & Mathematics with Applications, 2022, 120: 105-114.
  • [35] Fan, Jianqing; Gu, Yihong; Zhou, Wen-Xin. How Do Noise Tails Impact on Deep ReLU Networks? Annals of Statistics, 2024, 52(4): 1845-1871.
  • [36] Fu, Yonggan; Yu, Qixuan; Zhang, Yang; Wu, Shang; Ouyang, Xu; Cox, David; Lin, Yingyan. Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [37] Cleaver, Ryan; Venayagamoorthy, Ganesh Kumar. Learning Functions Generated by Randomly Initialized MLPs and SRNs. 2009 IEEE Symposium on Computational Intelligence in Control and Automation (CICA), 2009: 62-69.
  • [38] Zou, Difan; Cao, Yuan; Zhou, Dongruo; Gu, Quanquan. Gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning, 2020, 109: 467-492.
  • [39] Corlay, Vincent; Boutros, Joseph J.; Ciblat, Philippe; Brunel, Loic. On the CVP for the root lattices via folding with deep ReLU neural networks. 2019 IEEE International Symposium on Information Theory (ISIT), 2019: 1622-1626.
  • [40] Montanelli, Hadrien; Yang, Haizhao; Du, Qiang. Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions. Journal of Computational Mathematics, 2021, 39(6): 801-815.