On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths

Cited by: 0
Author: Quynh Nguyen [1]
Affiliation: [1] MPI MIS, Leipzig, Germany
Funding: European Research Council
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We give a simple proof for the global convergence of gradient descent in training deep ReLU networks with the standard square loss, and show some of its improvements over the state of the art. In particular, while prior works require all the hidden layers to be wide, with width at least Omega(N^8) (N being the number of training samples), we require a single wide layer of linear, quadratic, or cubic width, depending on the type of initialization. Unlike many recent proofs based on the Neural Tangent Kernel (NTK), our proof need not track the evolution of the entire NTK matrix or, more generally, any quantities related to the changes of activation patterns during training. Instead, we only need to track the evolution of the output at the last hidden layer, which can be done much more easily thanks to the Lipschitz property of ReLU. Some highlights of our setting: (i) all the layers are trained with standard gradient descent, (ii) the network has standard parameterization as opposed to the NTK one, and (iii) the network has a single wide layer as opposed to having all wide hidden layers as in most NTK-related results.
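For concreteness, a minimal sketch of the setting the abstract describes (the notation below is ours and not taken from the paper): a depth-L ReLU network trained on the square loss by standard full-batch gradient descent, where the 1-Lipschitz property of the ReLU activation sigma(u) = max(u, 0) is what makes the output of the last hidden layer easy to track.

    f(x;\theta) = W_L\,\sigma\!\big(W_{L-1}\cdots\sigma(W_1 x)\big)
    % square loss over N training samples (x_i, y_i)
    \Phi(\theta) = \tfrac{1}{2}\sum_{i=1}^{N}\big(f(x_i;\theta) - y_i\big)^2
    % standard gradient descent with step size \eta
    \theta_{t+1} = \theta_t - \eta\,\nabla\Phi(\theta_t)
    % ReLU is 1-Lipschitz, applied entrywise
    \|\sigma(u) - \sigma(v)\|_2 \le \|u - v\|_2

The last inequality is the key elementary fact: a bound on how much the weights move under gradient descent directly bounds how much the last hidden layer's output moves, without tracking activation patterns or the full NTK matrix.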
Pages: 7