Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks

Cited by: 0
Authors
Karakida, Ryo [1 ]
Osawa, Kazuki [2 ]
Affiliations
[1] Artificial Intelligence Res Ctr AIST, Tokyo, Japan
[2] Tokyo Inst Technol, Dept Comp Sci, Tokyo, Japan
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Natural Gradient Descent (NGD) helps to accelerate the convergence of gradient descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher information converge sufficiently fast in practice. Nevertheless, it remains unclear from a theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In function space, the training dynamics with approximate Fisher information are identical to those with exact Fisher information, and they converge quickly. This fast convergence holds for layer-wise approximations; for instance, for the block-diagonal approximation in which each block corresponds to a layer, as well as for the block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these different approximations have an isotropic gradient in function space, and this isotropy plays a fundamental role in achieving the same convergence properties in training. Thus, the current study gives a novel and unified theoretical foundation with which to understand NGD methods in deep learning.
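The following is a minimal NumPy sketch, not the authors' implementation, showing the objects compared in the abstract: a natural-gradient step preconditioned by the exact Fisher for squared loss (the Gauss-Newton matrix) versus one preconditioned by a layer-wise block-diagonal approximation, together with the resulting change of the network outputs, i.e., the update seen in function space. The network size, damping constant, and learning rate are illustrative assumptions; the abstract's equivalence result is an asymptotic statement for the infinite-width limit, which this small finite example does not reproduce, it only illustrates how the two preconditioners enter the update.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N examples, d inputs, scalar targets (illustrative sizes).
N, d, h = 8, 3, 64
X = rng.normal(size=(N, d))
y = rng.normal(size=N)

# One-hidden-layer network f(x) = w2 . tanh(W1 x + b1) with NTK-style scaling.
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
b1 = np.zeros(h)
w2 = rng.normal(size=h) / np.sqrt(h)
theta = np.concatenate([W1.ravel(), b1, w2])

def unpack(theta):
    W1 = theta[:h * d].reshape(h, d)
    b1 = theta[h * d:h * d + h]
    w2 = theta[h * d + h:]
    return W1, b1, w2

def forward_and_jacobian(theta, X):
    """Return outputs f (n,) and per-example output Jacobians J (n, num_params)."""
    n = X.shape[0]
    W1, b1, w2 = unpack(theta)
    a = np.tanh(X @ W1.T + b1)                   # hidden activations, (n, h)
    f = a @ w2                                   # network outputs, (n,)
    gate = w2 * (1.0 - a ** 2)                   # df/db1, (n, h)
    J_W1 = gate[:, :, None] * X[:, None, :]      # df/dW1, (n, h, d)
    J = np.concatenate([J_W1.reshape(n, h * d), gate, a], axis=1)
    return f, J

f, J = forward_and_jacobian(theta, X)
num_params = J.shape[1]
residual = f - y
grad = J.T @ residual / N                        # gradient of 0.5 * MSE
fisher = J.T @ J / N                             # Fisher for squared loss (= Gauss-Newton matrix)

# Layer-wise block-diagonal approximation: keep per-layer blocks, drop cross-layer blocks.
layer1 = slice(0, h * d + h)                     # (W1, b1)
layer2 = slice(h * d + h, num_params)            # w2
fisher_block = np.zeros_like(fisher)
fisher_block[layer1, layer1] = fisher[layer1, layer1]
fisher_block[layer2, layer2] = fisher[layer2, layer2]

eta, damping = 1.0, 1e-3                         # illustrative values
for name, fisher_used in [("exact Fisher", fisher),
                          ("block-diagonal Fisher", fisher_block)]:
    # NGD step: theta <- theta - eta * (F + damping * I)^{-1} grad
    delta = -eta * np.linalg.solve(fisher_used + damping * np.eye(num_params), grad)
    f_new, _ = forward_and_jacobian(theta + delta, X)
    print(f"{name:>22}: residual norm {np.linalg.norm(residual):.4f}"
          f" -> {np.linalg.norm(f_new - y):.4f} after one step")

With small damping, the exact step nearly solves the linearized least-squares problem in a single update, while the block-diagonal variant applies the same preconditioning within each layer and discards cross-layer curvature; the paper's claim is that, in the infinite-width limit, both produce the same fast, isotropic dynamics in function space.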
Pages: 11
Related papers
50 records in total
  • [31] Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks
    An, Jing
    Lu, Jianfeng
    [J]. arXiv, 2023,
  • [32] Impact of Mathematical Norms on Convergence of Gradient Descent Algorithms for Deep Neural Networks Learning
    Cai, Linzhe
    Yu, Xinghuo
    Li, Chaojie
    Eberhard, Andrew
    Lien Thuy Nguyen
    Chuong Thai Doan
    [J]. AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13728 : 131 - 144
  • [33] Natural Gradient Descent for Training Stochastic Complex-Valued Neural Networks
    Nitta, Tohru
    [J]. INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2014, 5 (07) : 193 - 198
  • [34] Nonlinear system identification using neural networks trained with natural gradient descent
    Ibnkahla, M
    [J]. EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING, 2003, 2003 (12) : 1229 - 1237
  • [35] Convergence of Stochastic Gradient Descent in Deep Neural Network
    Bai-cun Zhou
    Cong-ying Han
    Tian-de Guo
    [J]. Acta Mathematicae Applicatae Sinica, English Series, 2021, 37 : 126 - 136
  • [40] Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
    Lee, Jaehoon
    Xiao, Lechao
    Schoenholz, Samuel S.
    Bahri, Yasaman
    Novak, Roman
    Sohl-Dickstein, Jascha
    Pennington, Jeffrey
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32