On the Convergence Rate of Training Recurrent Neural Networks

Cited by: 0
Authors
Allen-Zhu, Zeyuan [1 ]
Li, Yuanzhi [2 ]
Song, Zhao [3 ]
Affiliations
[1] Microsoft Res AI, Redmond, WA 98052 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] UT Austin, Austin, TX USA
Keywords
MODEL;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
How can local-search methods such as stochastic gradient descent (SGD) avoid bad local minima in training multi-layer neural networks? Why can they fit random labels even given non-convex and non-smooth architectures? Most existing theory only covers networks with one hidden layer, so can we go deeper? In this paper, we focus on recurrent neural networks (RNNs), which are multi-layer networks widely used in natural language processing. They are harder to analyze than feedforward neural networks, because the same recurrent unit is repeatedly applied across the entire time horizon of length L, which is analogous to a feedforward network of depth L. We show that when the number of neurons is sufficiently large, i.e., polynomial in the training data size and in L, SGD is capable of minimizing the regression loss at a linear convergence rate. This gives theoretical evidence of how RNNs can memorize data. More importantly, in this paper we build general toolkits to analyze multi-layer networks with ReLU activations. For instance, we prove why ReLU activations can prevent exponential gradient explosion or vanishing, and we build a perturbation theory to analyze the first-order approximation of multi-layer networks.
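To make the abstract's claim concrete, below is a minimal, purely illustrative sketch (assuming PyTorch; it is not the paper's construction or proof setting) that trains a wide ReLU Elman RNN with plain SGD on random regression targets and prints the training loss. The sizes n, L, d_in, and m are toy values, far smaller than the polynomial over-parameterization the theorem requires.

# Minimal sketch (PyTorch assumed): train a wide Elman-style ReLU RNN with plain SGD
# on a small regression task with random targets ("random labels") and watch the loss.
# Widths and step counts are illustrative only.
import torch

torch.manual_seed(0)

n, L, d_in, m = 32, 10, 8, 512           # samples, sequence length, input dim, hidden width
X = torch.randn(n, L, d_in)              # n input sequences of length L
y = torch.randn(n, 1)                    # random regression targets

rnn = torch.nn.RNN(d_in, m, nonlinearity="relu", batch_first=True)
head = torch.nn.Linear(m, 1)             # read-out from the last hidden state
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    _, h_L = rnn(X)                      # h_L: final hidden state, shape (1, n, m)
    loss = torch.nn.functional.mse_loss(head(h_L.squeeze(0)), y)
    loss.backward()
    opt.step()
    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss.item():.6f}")

With a sufficiently large hidden width m and a small enough step size, the printed loss should decrease roughly geometrically, which is the "linear convergence" the abstract refers to; shrinking m or growing L tends to degrade this behavior.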
Pages: 13
Related papers
50 records in total
  • [21] Convergence of Gradient Descent Algorithm for Diagonal Recurrent Neural Networks
    Xu, Dongpo
    Li, Zhengxue
    Wu, Wei
    Ding, Xiaoshuai
    Qu, Di
    [J]. 2007 SECOND INTERNATIONAL CONFERENCE ON BIO-INSPIRED COMPUTING: THEORIES AND APPLICATIONS, 2007, : 29 - 31
  • [22] Global output convergence of recurrent neural networks with distributed delays
    Liang, Jinling
    Cao, Jinde
    [J]. NONLINEAR ANALYSIS-REAL WORLD APPLICATIONS, 2007, 8 (01) : 187 - 197
  • [23] Global exponential convergence of recurrent neural networks with variable delays
    Yi, Z
    [J]. THEORETICAL COMPUTER SCIENCE, 2004, 312 (2-3) : 281 - 293
  • [24] Global convergence rate of recurrently connected neural networks
    Chen, TP
    Lu, WL
    Amari, S
    [J]. NEURAL COMPUTATION, 2002, 14 (12) : 2947 - 2957
  • [25] REGULARIZED NEURAL NETWORKS - SOME CONVERGENCE RATE RESULTS
    CORRADI, V
    WHITE, H
    [J]. NEURAL COMPUTATION, 1995, 7 (06) : 1225 - 1244
  • [26] Prescribed convergence analysis of recurrent neural networks with parameter variations
    Bao, Gang
    Zeng, Zhigang
    [J]. MATHEMATICS AND COMPUTERS IN SIMULATION, 2021, 182 : 858 - 870
  • [27] Rate of convergence in density estimation using neural networks
    Modha, DS
    Masry, E
    [J]. NEURAL COMPUTATION, 1996, 8 (05) : 1107 - 1122
  • [28] Convergence and Rate Analysis of Neural Networks for Sparse Approximation
    Balavoine, Aurele
    Romberg, Justin
    Rozell, Christopher J.
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (09) : 1377 - 1389
  • [29] Convergence rate of Artificial Neural Networks for estimation in software
    Rankovic, Dragica
    Rankovic, Nevena
    Ivanovic, Mirjana
    Lazic, Ljubomir
    [J]. INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 138
  • [30] Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks
    Wang, Mingze
    Ma, Chao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,