Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent

Cited by: 0
Authors: Holzmueller, David [1]; Steinwart, Ingo [1]
Affiliations: [1] Univ Stuttgart, Fac Math & Phys, Inst Stochast & Applicat, Stuttgart, Germany
Keywords: Neural networks; consistency; gradient descent; initialization; neural tangent kernel; local minima
DOI: not available
Chinese Library Classification (CLC): TP [automation technology; computer technology]
Discipline code: 0812
Abstract
We prove that two-layer (Leaky)ReLU networks initialized, for example, by the widely used method proposed by He et al. (2015) and trained using gradient descent on a least-squares loss are not universally consistent. Specifically, we describe a large class of one-dimensional data-generating distributions for which, with high probability, gradient descent only finds a bad local minimum of the optimization landscape, since it is unable to move the biases far away from their initialization at zero. It turns out that in these cases, the network found by gradient descent essentially performs linear regression even if the target function is non-linear. We further provide numerical evidence that this happens in practical situations and for some multi-dimensional distributions, and that stochastic gradient descent exhibits similar behavior. We also provide empirical results on how the choice of initialization and optimizer can influence this behavior.
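To make the mechanism described above concrete, the following is a minimal NumPy sketch (illustrative only, not the authors' experimental code): it builds a two-layer ReLU network with He-style initialization and zero biases, trains it by full-batch gradient descent on a least-squares loss for a one-dimensional non-linear target, and then reports how far the hidden-layer biases have moved from zero together with a comparison against plain linear regression. The width, learning rate, step count, and sine target below are assumptions chosen for illustration.

# Minimal sketch (illustrative, not the authors' code): two-layer ReLU network
# with He-style initialization and zero biases, trained by full-batch gradient
# descent on a least-squares loss for a 1D non-linear target. Width, learning
# rate, step count, and the sine target are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional inputs with a non-linear target
n = 256
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0])

# Network f(x) = W2 @ relu(W1 x + b1) + b2; He-style weights, biases at zero
width = 128
W1 = rng.normal(0.0, np.sqrt(2.0 / 1), size=(width, 1))   # fan_in = 1
b1 = np.zeros(width)
W2 = rng.normal(0.0, np.sqrt(2.0 / width), size=(1, width))
b2 = np.zeros(1)

def forward(X):
    pre = X @ W1.T + b1               # pre-activations, shape (n, width)
    act = np.maximum(pre, 0.0)        # ReLU
    return act @ W2.T + b2, pre, act

lr = 0.01
for step in range(20000):
    out, pre, act = forward(X)
    resid = out[:, 0] - y                     # least-squares residuals
    grad_out = 2.0 * resid[:, None] / n       # d(mean squared error)/d(out)
    gW2 = grad_out.T @ act
    gb2 = grad_out.sum(axis=0)
    grad_pre = (grad_out @ W2) * (pre > 0.0)  # backpropagate through ReLU
    gW1 = grad_pre.T @ X
    gb1 = grad_pre.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

out, _, _ = forward(X)
mse_net = np.mean((out[:, 0] - y) ** 2)

# Baseline: ordinary linear regression on the same data
A = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_lin = np.mean((A @ coef - y) ** 2)

print(f"max |hidden bias| after training: {np.abs(b1).max():.4f}")
print(f"network MSE:           {mse_net:.4f}")
print(f"linear regression MSE: {mse_lin:.4f}")

The quantities to inspect are whether the hidden biases stay close to zero and whether the network's mean squared error remains close to the linear-regression baseline; this mirrors the behavior the paper proves for its class of one-dimensional distributions, although these particular hyperparameters are not guaranteed to reproduce it.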
Pages: 82
Related Papers (50 in total; 10 listed below)
  • [1] Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent
    Holzmüller, David
    Steinwart, Ingo
    [J]. Journal of Machine Learning Research, 2022, 23
  • [2] Implicit Bias of Gradient Descent for Two-layer ReLU and Leaky ReLU Networks on Nearly-orthogonal Data
    Kou, Yiwen
    Chen, Zixiang
    Gu, Quanquan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Training a Two-Layer ReLU Network Analytically
    Barbu, Adrian
    [J]. SENSORS, 2023, 23 (08)
  • [4] Hidden Minima in Two-Layer ReLU Networks
    Arjevani, Yossi
    [J]. arXiv, 2023,
  • [5] Annihilation of Spurious Minima in Two-Layer ReLU Networks
    Arjevani, Yossi
    Field, Michael
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [6] Convergence Analysis of Two-layer Neural Networks with ReLU Activation
    Li, Yuanzhi
    Yuan, Yang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [7] A Global Universality of Two-Layer Neural Networks with ReLU Activations
    Hatano, Naoya
    Ikeda, Masahiro
    Ishikawa, Isao
    Sawano, Yoshihiro
    [J]. JOURNAL OF FUNCTION SPACES, 2021, 2021
  • [8] Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance
    Ainsworth, Mark
    Shin, Yeonjong
    [J]. SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2021, 43 (05): : A3438 - A3468
  • [9] Learning One-hidden-layer ReLU Networks via Gradient Descent
    Zhang, Xiao
    Yu, Yaodong
    Wang, Lingxiao
    Gu, Quanquan
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [10] Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks
    Cheridito, Patrick
    Jentzen, Arnulf
    Rossmannek, Florian
    [J]. JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2024: 2617-2648