On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths

Cited by: 0
Author
Quynh Nguyen [1 ]
Affiliation
[1] MPI MIS, Leipzig, Germany
Funding
European Research Council
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We give a simple proof for the global convergence of gradient descent in training deep ReLU networks with the standard square loss, and show some of its improvements over the state of the art. In particular, while prior works require all the hidden layers to be wide with width at least Omega(N^8) (N being the number of training samples), we require a single wide layer of linear, quadratic or cubic width depending on the type of initialization. Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof need not track the evolution of the entire NTK matrix, or more generally, any quantities related to the changes of activation patterns during training. Instead, we only need to track the evolution of the output at the last hidden layer, which can be done much more easily thanks to the Lipschitz property of ReLU. Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer, as opposed to having all hidden layers wide as in most NTK-related results.
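A minimal, self-contained sketch of the setting described in the abstract may help fix ideas: a deep ReLU network with standard (He-style) parameterization and a single wide hidden layer, trained on the square loss by full-batch gradient descent applied to all layers. Everything below (the function names init_params / forward / square_loss, the widths, the learning rate, the toy data, and the choice of making the last hidden layer the wide one) is an illustrative assumption and not taken from the paper.

```python
# Sketch only: standard-parameterized deep ReLU net, one wide hidden layer,
# full-batch gradient descent on the square loss. All sizes are toy values.
import jax
import jax.numpy as jnp

def init_params(key, widths):
    # Standard (He-style) initialization, W_l ~ N(0, 2 / fan_in); no NTK scaling.
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(jax.random.normal(sub, (d_out, d_in)) * jnp.sqrt(2.0 / d_in))
    return params

def forward(params, X):
    # ReLU on every hidden layer; linear output layer.
    H = X.T                            # shape (d_0, N)
    for W in params[:-1]:
        H = jax.nn.relu(W @ H)         # after the loop, H is the last hidden layer's output
    return (params[-1] @ H).T          # predictions, shape (N, 1)

def square_loss(params, X, y):
    return 0.5 * jnp.sum((forward(params, X) - y) ** 2)

key = jax.random.PRNGKey(0)
key, kx, ky = jax.random.split(key, 3)
N, d = 64, 10                          # N training samples in d dimensions (toy sizes)
X = jax.random.normal(kx, (N, d))
y = jax.random.normal(ky, (N, 1))

# A single wide hidden layer (here the last one, width proportional to N);
# the other hidden layers stay narrow.
widths = [d, 32, 32, 4 * N, 1]
params = init_params(key, widths)

loss_and_grad = jax.jit(jax.value_and_grad(square_loss))
lr = 1e-3
for step in range(2000):
    loss, grads = loss_and_grad(params, X, y)
    params = [W - lr * g for W, g in zip(params, grads)]  # all layers are updated
    if step % 500 == 0:
        print(step, float(loss))
```

The quantity the abstract refers to (the output at the last hidden layer) corresponds in this sketch to H after the loop in forward, rather than to the full NTK matrix.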
Pages: 7
Related papers
50 related records in total
  • [1] Global Convergence of Gradient Descent for Deep Linear Residual Networks
    Wu, Lei
    Wang, Qingcan
    Ma, Chao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [2] A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
    Jentzen, Arnulf
    Riekert, Adrian
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [3] A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
    Jentzen, Arnulf
    Riekert, Adrian
    ZEITSCHRIFT FUR ANGEWANDTE MATHEMATIK UND PHYSIK, 2022, 73 (05):
  • [4] Gradient descent optimizes over-parameterized deep ReLU networks
    Zou, Difan
    Cao, Yuan
    Zhou, Dongruo
    Gu, Quanquan
    MACHINE LEARNING, 2020, 109 (03): 467-492
  • [5] Convergence of deep ReLU networks
    Xu, Yuesheng
    Zhang, Haizhang
    NEUROCOMPUTING, 2024, 571
  • [6] Convergence of gradient descent for learning linear neural networks
    Nguegnang, Gabin Maxime
    Rauhut, Holger
    Terstiege, Ulrich
    ADVANCES IN CONTINUOUS AND DISCRETE MODELS, 2024, 2024 (01):
  • [7] Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
    Shamir, Ohad
    CONFERENCE ON LEARNING THEORY, VOL 99, 2019, 99
  • [8] A proof of convergence for gradient descent in the training of artificial neural networks for constant functions
    Cheridito, Patrick
    Jentzen, Arnulf
    Riekert, Adrian
    Rossmannek, Florian
    JOURNAL OF COMPLEXITY, 2022, 72