On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths

Cited by: 0
Authors
Quynh Nguyen [1]
Affiliation
[1] MPI MIS, Leipzig, Germany
Funding
European Research Council
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We give a simple proof for the global convergence of gradient descent in training deep ReLU networks with the standard square loss, and show some of its improvements over the state of the art. In particular, while prior works require all the hidden layers to be wide with width at least Omega(N^8) (N being the number of training samples), we require a single wide layer of linear, quadratic or cubic width depending on the type of initialization. Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof need not track the evolution of the entire NTK matrix, or more generally, any quantities related to the changes of activation patterns during training. Instead, we only need to track the evolution of the output at the last hidden layer, which can be done much more easily thanks to the Lipschitz property of ReLU. Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer as opposed to having all wide hidden layers as in most NTK-related results.
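To make the setting concrete, the following is a minimal sketch of the training setup described in the abstract: a deep ReLU network in standard parameterization with a single wide hidden layer, all layers trained by full-batch gradient descent on the standard square loss. The dimensions, widths, initialization scale, step size, and helper names (forward, gradients) are illustrative choices made here, not taken from the paper, and the sketch does not reproduce the paper's proof or its precise width requirements.

# Minimal sketch (not the paper's construction): deep ReLU network, one wide
# hidden layer, standard parameterization, full-batch gradient descent on the
# square loss, all layers trained.
import numpy as np

rng = np.random.default_rng(0)

N, d = 20, 5                    # number of training samples and input dimension
widths = [d, 64, 512, 64, 1]    # a single wide layer (512); other widths are illustrative

# He-style initialization for every layer; the paper's width requirement
# (linear, quadratic or cubic in N) depends on the chosen initialization scheme.
W = [rng.normal(0, np.sqrt(2.0 / m), size=(n, m)) for m, n in zip(widths[:-1], widths[1:])]

X = rng.normal(size=(d, N))     # synthetic data for illustration only
Y = rng.normal(size=(1, N))

def forward(W, X):
    """Return the pre-activations of every layer (last entry is the network output)."""
    acts, H = [], X
    for i, Wi in enumerate(W):
        Z = Wi @ H
        acts.append(Z)
        H = np.maximum(Z, 0.0) if i < len(W) - 1 else Z  # ReLU on hidden layers only
    return acts

def gradients(W, X, Y):
    """Backpropagate the square loss 0.5 * ||f(X) - Y||_F^2 through the ReLU network."""
    acts = forward(W, X)
    Hs = [X] + [np.maximum(Z, 0.0) for Z in acts[:-1]]    # inputs fed to each layer
    delta = acts[-1] - Y                                  # gradient at the output
    grads = [None] * len(W)
    for i in reversed(range(len(W))):
        grads[i] = delta @ Hs[i].T
        if i > 0:
            delta = (W[i].T @ delta) * (acts[i - 1] > 0)  # ReLU derivative mask
    return grads, 0.5 * np.sum((acts[-1] - Y) ** 2)

eta = 1e-3  # illustrative step size; the paper derives admissible rates from its assumptions
for step in range(200):
    grads, loss = gradients(W, X, Y)
    W = [Wi - eta * Gi for Wi, Gi in zip(W, grads)]
print("final squared loss:", gradients(W, X, Y)[1])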
Pages: 7