An Adaptive Learning Rate Schedule for SIGNSGD Optimizer in Neural Networks

Cited: 0
Authors
Kang Wang
Tao Sun
Yong Dou
Affiliations
[1] National University of Defense Technology, The National Laboratory for Parallel and Distributed Processing, School of Computer
Source
Neural Processing Letters | 2022 / Vol. 54
Keywords
SIGNSGD optimizer; An adaptive learning rate strategy; Communication; Fast convergence; Neural networks;
DOI
Not available
CLC Number
Subject Classification Code
Abstract
SIGNSGD dramatically improves the efficiency of training large neural networks by transmitting only the sign of each minibatch stochastic gradient, which compresses gradient communication while retaining a convergence rate at the level of standard stochastic gradient descent (SGD). Meanwhile, the learning rate plays a vital role in training neural networks, but existing learning rate optimization strategies face the following problems: (1) learning rate decay methods produce small learning rates that lead to slow convergence, and they require extra hyper-parameters beyond the initial learning rate, demanding more manual involvement. (2) Adaptive gradient algorithms have poor generalization performance and also rely on additional hyper-parameters. (3) Generating learning rates via two-level optimization models is difficult and time-consuming during training. To this end, we propose a novel adaptive learning rate schedule for neural network training with the SIGNSGD optimizer for the first time. In our method, motivated by the theoretical observation that the upper bound on the convergence rate is minimized with respect to the current learning rate in each iteration, the current learning rate can be expressed in closed form using only the historical learning rates. Then, given one initial value, the learning rates for different training stages can be obtained adaptively. Our proposed method has the following advantages: (1) it is an automatic method that needs no hyper-parameters other than one initial value, thus reducing manual involvement. (2) It has a faster convergence rate and outperforms standard SGD. (3) It enables neural networks to achieve better performance with fewer gradient communication bits. Three numerical simulations are conducted on different neural networks with three public datasets (MNIST, Cifar-10, and Cifar-100), and several numerical results are presented to demonstrate the efficiency of the proposed approach.
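The core mechanism the abstract describes, updating each parameter by the sign of its stochastic gradient so that only one bit per coordinate needs to be communicated, can be sketched as follows. Note that the paper's derived adaptive learning rate rule is not given in the abstract, so the decaying schedule below (`lr0 / sqrt(t)`) is only an illustrative placeholder, not the authors' method.

```python
import numpy as np

def signsgd_step(params, grads, lr):
    """One SIGNSGD update: move each parameter opposite the sign of its gradient.

    Only the sign of each gradient coordinate is used, which is what permits
    1-bit-per-coordinate gradient communication in distributed training.
    """
    return [p - lr * np.sign(g) for p, g in zip(params, grads)]

# Toy usage: minimize f(x) = x^2 starting from x = 5.
# The lr0 / sqrt(t) decay is a placeholder, NOT the paper's adaptive
# schedule, which is instead derived from a convergence-rate bound.
x = np.array([5.0])
lr0 = 0.5
for t in range(1, 51):
    grad = 2.0 * x                 # exact gradient of x^2
    lr = lr0 / np.sqrt(t)          # placeholder decay schedule
    (x,) = signsgd_step([x], [grad], lr)
```

Because the update magnitude is independent of the gradient's scale, the iterate oscillates around the minimizer with an amplitude set by the learning rate, which is exactly why the learning rate schedule matters so much for SIGNSGD.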
Pages: 803 - 816
Page count: 13
Related Papers
50 records in total
  • [41] Fast Learning in Spiking Neural Networks by Learning Rate Adaptation
    Fang Huijuan
    Luo Jiliang
    Wang Fei
    CHINESE JOURNAL OF CHEMICAL ENGINEERING, 2012, 20 (06) : 1219 - 1224
  • [42] Backpropagation Neural Network with Adaptive Learning Rate for Classification
    Jullapak, Rujira
    Thammano, Arit
    ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153 : 493 - 499
  • [43] Learning with adaptive layer activation in spiking neural networks
    Zuters, Janis
    DATABASES AND INFORMATION SYSTEMS, 2008, : 117 - 128
  • [44] A review of adaptive online learning for artificial neural networks
    Beatriz Pérez-Sánchez
    Oscar Fontenla-Romero
    Bertha Guijarro-Berdiñas
    Artificial Intelligence Review, 2018, 49 : 281 - 299
  • [45] DiscoverNet: Adaptive learning environment for designing neural networks
    Belkada, S
    Cristea, AI
    Okamoto, T
    IWALT 2000: INTERNATIONAL WORKSHOP ON ADVANCED LEARNING TECHNOLOGIES: ADVANCED LEARNING TECHNOLOGY: DESIGN AND DEVELOPMENT ISSUES, 2000, : 155 - 156
  • [46] Automatic learning using neural networks and adaptive regression
    Pham, DT
    Peat, BJ
    MEASUREMENT & CONTROL, 1999, 32 (09): : 270 - 274
  • [47] Adaptive LRBP using learning automata for neural networks
    Mashoufi, B.
    Menhaj, Mohammad B.
    Motamedi, Sayed A.
    Meybodi, Mohammad R.
    Advances in Neural Networks and Applications, 2001, : 280 - 286
  • [48] Adaptive learning of rational expectations using neural networks
    Heinemann, M
    JOURNAL OF ECONOMIC DYNAMICS & CONTROL, 2000, 24 (5-7): : 1007 - 1026
  • [49] A review of adaptive online learning for artificial neural networks
    Perez-Sanchez, Beatriz
    Fontenla-Romero, Oscar
    Guijarro-Berdinas, Bertha
    ARTIFICIAL INTELLIGENCE REVIEW, 2018, 49 (02) : 281 - 299
  • [50] Learning to Generate Questions with Adaptive Copying Neural Networks
    Lu, Xinyuan
    SIGMOD '19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2019, : 1838 - 1840