On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths

Cited by: 0
Author(s)
Quynh Nguyen [1]
Affiliation
[1] MPI MIS, Leipzig, Germany
Funding
European Research Council;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We give a simple proof of the global convergence of gradient descent in training deep ReLU networks with the standard square loss, and show some of its improvements over the state of the art. In particular, while prior works require all the hidden layers to be wide, with width at least Omega(N^8) (N being the number of training samples), we require a single wide layer of linear, quadratic, or cubic width, depending on the type of initialization. Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof need not track the evolution of the entire NTK matrix, or more generally, any quantities related to the changes of activation patterns during training. Instead, we only need to track the evolution of the output at the last hidden layer, which can be done much more easily thanks to the Lipschitz property of ReLU. Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has the standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer, as opposed to all hidden layers being wide as in most NTK-related results.
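As a rough illustration of the setting described in the abstract (a sketch only, not the paper's code), the snippet below trains a deep ReLU network with standard parameterization, one wide hidden layer, and plain full-batch gradient descent on the square loss. The widths, toy data, learning rate, number of steps, and the LeCun-style initialization are placeholder assumptions; which layer must be wide, and how wide, depends on the initialization as stated in the paper.

```python
# Minimal sketch of the setting: deep ReLU net, standard parameterization,
# a single wide hidden layer, full-batch gradient descent on the square loss.
# All concrete numbers below are illustrative assumptions, not from the paper.
import jax
import jax.numpy as jnp

def init_params(key, widths):
    """widths = [d_in, n_1, ..., n_L, 1]; returns a list of weight matrices."""
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        # Standard (non-NTK) parameterization: the 1/sqrt(fan_in) scaling is
        # folded into the initialization, not into the forward pass.
        params.append(jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in))
    return params

def forward(params, x):
    h = x
    for W in params[:-1]:
        h = jax.nn.relu(h @ W)   # ReLU hidden layers
    return h @ params[-1]        # linear output layer

def square_loss(params, x, y):
    return 0.5 * jnp.sum((forward(params, x) - y) ** 2)

# Toy data: N samples in d dimensions (placeholders).
key = jax.random.PRNGKey(0)
N, d = 64, 10
key, kx, ky = jax.random.split(key, 3)
x = jax.random.normal(kx, (N, d))
y = jax.random.normal(ky, (N, 1))

# One hidden layer of width on the order of N (here the last hidden layer,
# purely for illustration); the remaining hidden layers have constant width.
params = init_params(key, [d, 50, 50, 4 * N, 1])

# All layers are trained with standard (full-batch) gradient descent.
grad_fn = jax.jit(jax.grad(square_loss))
lr = 1e-3
for step in range(1000):
    grads = grad_fn(params, x, y)
    params = [W - lr * g for W, g in zip(params, grads)]
```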
Pages: 7
Related Papers
50 records in total
  • [31] Global convergence of a descent nonlinear conjugate gradient method
    Li, Xiaoyong
    Liu, Hailin
    ICMS2010: PROCEEDINGS OF THE THIRD INTERNATIONAL CONFERENCE ON MODELLING AND SIMULATION, VOL 1: ENGINEERING COMPUTATION AND FINITE ELEMENT ANALYSIS, 2010, : 79 - 84
  • [32] The global convergence of a descent PRP conjugate gradient method
    Li, Min
    Feng, Heying
    Liu, Jianguo
    Computational and Applied Mathematics, 2012, 31 (01) : 59 - 83
  • [33] Convergence of Stochastic Gradient Descent in Deep Neural Network
    Zhou, Bai-cun
    Han, Cong-ying
    Guo, Tian-de
    ACTA MATHEMATICAE APPLICATAE SINICA-ENGLISH SERIES, 2021, 37 (01): : 126 - 136
  • [35] ReLU Deep Neural Networks and Linear Finite Elements
    He, Juncai
    Li, Lin
    Xu, Jinchao
    Zheng, Chunyue
    JOURNAL OF COMPUTATIONAL MATHEMATICS, 2020, 38 (03) : 502 - 527
  • [36] Implicit Bias of Gradient Descent for Two-layer ReLU and Leaky ReLU Networks on Nearly-orthogonal Data
    Kou, Yiwen
    Chen, Zixiang
    Gu, Quanquan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [37] A Convergence Analysis of Gradient Descent on Graph Neural Networks
    Awasthi, Pranjal
    Das, Abhimanyu
    Gollapudi, Sreenivas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [39] Complexity control by gradient descent in deep networks
    Poggio, Tomaso
    Liao, Qianli
    Banburski, Andrzej
    NATURE COMMUNICATIONS, 2020, 11 (01)
  • [40] Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
    Jentzen, Arnulf
    Riekert, Adrian
    JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS, 2023, 517 (02)