Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation

Cited by: 1
Authors
Eberle, Simon [1 ]
Jentzen, Arnulf [2 ,3 ,4 ]
Riekert, Adrian [4 ]
Weiss, Georg S. [5 ]
Affiliations
[1] Basque Ctr Appl Math, Bilbao, Spain
[2] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen Res Inst Big Data, Shenzhen, Peoples R China
[4] Univ Munster, Appl Math Inst Anal & Numer, Munster, Germany
[5] Univ Duisburg Essen, Fac Math, AG Anal Partial Differential Equat, Essen, Germany
Source
ELECTRONIC RESEARCH ARCHIVE | 2023, Vol. 31, No. 5
Keywords
deep learning; artificial intelligence; optimization; gradient flow; Kurdyka-Lojasiewicz inequalities; DESCENT METHODS;
DOI
10.3934/era.2023128
Chinese Library Classification
O1 [Mathematics];
Subject Classification Code
0701; 070101
Abstract
The training of artificial neural networks (ANNs) with rectified linear unit (ReLU) activation via gradient descent (GD) type optimization schemes is nowadays a common, industrially relevant procedure. GD type optimization schemes can be regarded as temporal discretization methods for the gradient flow (GF) differential equations associated with the considered optimization problem. In view of this, a natural direction of research is to first develop a mathematical convergence theory for time-continuous GF differential equations and thereafter to extend such a time-continuous convergence theory to implementable, time-discrete GD type optimization methods. In this article we establish two basic results for GF differential equations in the training of fully-connected feedforward ANNs with one hidden layer and ReLU activation. In the first main result we show, under the assumption that the probability distribution of the input data of the considered supervised learning problem is absolutely continuous with a bounded density function, that every GF differential equation admits for every initial value a solution which is also unique among a suitable class of solutions. In the second main result we prove, under the assumption that the target function and the density function of the input data distribution are piecewise polynomial, that every non-divergent GF trajectory converges with an appropriate rate of convergence to a critical point and that the risk of the non-divergent GF trajectory converges with rate 1 to the risk of the critical point. We establish this result by proving that the considered risk function is semialgebraic and, consequently, satisfies the Kurdyka-Lojasiewicz inequality, which allows us to show convergence of every non-divergent GF trajectory.
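To make the quantities named in the abstract concrete, the following LaTeX sketch writes out, under assumed notation that does not appear in this record, a risk functional for a fully-connected feedforward ANN with one hidden layer and ReLU activation, the associated gradient flow (GF) differential equation, gradient descent (GD) as its explicit Euler discretization, and the generic form of a Kurdyka-Lojasiewicz estimate; the exact hypotheses, notation, and constants used in the article may differ.

% Sketch under assumed notation (theta, H, p, f, \mathcal{G} are illustrative, not taken from the record).
% Realization of a one-hidden-layer ReLU network with parameters theta = (w, b, v, c):
\[
  \mathcal{N}_{\theta}(x) \;=\; c + \sum_{j=1}^{H} v_j \max\{ w_j^{\top} x + b_j ,\, 0 \},
  \qquad x \in [a,b]^d .
\]
% Risk with respect to the input density p and the target function f:
\[
  \mathcal{L}(\theta) \;=\; \int_{[a,b]^d} \bigl( \mathcal{N}_{\theta}(x) - f(x) \bigr)^2 \, p(x) \, \mathrm{d}x .
\]
% GF differential equation, where \mathcal{G} denotes a suitable generalized gradient
% of \mathcal{L} (the ReLU function is not differentiable at the origin):
\[
  \frac{\mathrm{d}}{\mathrm{d}t} \Theta_t \;=\; - \mathcal{G}(\Theta_t),
  \qquad \Theta_0 = \xi .
\]
% GD type schemes arise as explicit Euler discretizations of the GF equation
% with learning rate \gamma > 0:
\[
  \theta_{n+1} \;=\; \theta_n - \gamma \, \mathcal{G}(\theta_n) .
\]
% Generic Kurdyka-Lojasiewicz inequality near a critical point \vartheta:
% there exist \varepsilon, C \in (0,\infty) and \alpha \in (0,1) such that
\[
  \bigl| \mathcal{L}(\theta) - \mathcal{L}(\vartheta) \bigr|^{\alpha}
  \;\le\; C \, \lVert \mathcal{G}(\theta) \rVert
  \qquad \text{for all } \theta \text{ with } \lVert \theta - \vartheta \rVert \le \varepsilon .
\]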
Pages: 2519-2554
Number of pages: 36