The interchangeability of learning rate and gain in backpropagation neural networks

Cited by: 47
Authors: Thimm, G; Moerland, P; Fiesler, E
Affiliation: [1] IDIAP
DOI: 10.1162/neco.1996.8.2.451
Chinese Library Classification: TP18 (artificial intelligence theory)
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The backpropagation algorithm is widely used for training multilayer neural networks. This publication investigates the gain of its activation function(s). Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations of the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
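The equivalence stated in the abstract can be illustrated numerically. The following sketch is my own construction (not code from the paper): for a single logistic unit trained by gradient descent on squared error, a gain of β with weights w and learning rate η produces the same trajectory as gain 1 with weights βw and learning rate β²η.

```python
import numpy as np

def sigmoid(z, gain=1.0):
    """Logistic activation with an explicit gain parameter."""
    return 1.0 / (1.0 + np.exp(-gain * z))

def train(w, x, t, eta, gain, steps):
    """Plain gradient descent on E = 0.5*(y - t)^2 for one logistic unit."""
    w = w.copy()
    for _ in range(steps):
        y = sigmoid(w @ x, gain)
        # dE/dw = (y - t) * dy/dz * dz/dw, where dy/dz = gain * y * (1 - y)
        grad = (y - t) * y * (1 - y) * gain * x
        w -= eta * grad
    return w, sigmoid(w @ x, gain)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
w0 = rng.normal(size=3)
beta, eta, t = 2.5, 0.1, 0.8

# Network A: gain beta, weights w0, learning rate eta.
w_a, y_a = train(w0, x, t, eta, gain=beta, steps=50)
# Network B: gain folded into weights (beta*w0) and learning rate (beta**2 * eta).
w_b, y_b = train(beta * w0, x, t, beta**2 * eta, gain=1.0, steps=50)

print(np.allclose(y_a, y_b), np.allclose(beta * w_a, w_b))  # prints: True True
```

At every step the two units compute identical outputs, and the weights stay related by the factor β, which is the single-unit case of the theorem; the paper extends this layer by layer to multilayer networks and to the momentum, flat-spot-elimination, and adaptive-gain variants.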
Pages: 451-460 (10 pages)
Related papers (50 in total)
  • [21] Amiri, Zahra; Hassanpour, Hamid; Khan, N. Mamode; Khan, M. Heenaye Mamode. Improving the Performance of Multilayer Backpropagation Neural Networks with Adaptive Learning Rate. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (ICABCD), 2018.
  • [22] Wythoff, B. J. Backpropagation Neural Networks: A Tutorial. Chemometrics and Intelligent Laboratory Systems, 1993, 18(2): 115-155.
  • [23] Li, Songsong; Okada, Toshimi; Chen, Xiaoming; Tang, Zheng. An Individual Adaptive Gain Parameter Backpropagation Algorithm for Complex-Valued Neural Networks. Advances in Neural Networks - ISNN 2006, Pt 1, 2006, 3971: 551-557.
  • [24] Matsuda, Satoshi. BPSpike II: A New Backpropagation Learning Algorithm for Spiking Neural Networks. Neural Information Processing, ICONIP 2016, Pt II, 2016, 9948: 56-65.
  • [25] Pai, Sunil; Sun, Zhanghao; Hughes, Tyler W.; Park, Taewon; Bartlett, Ben; Williamson, Ian A. D.; Minkov, Momchil; Milanizadeh, Maziyar; Abebe, Nathnael; Morichetti, Francesco; Melloni, Andrea; Fan, Shanhui; Solgaard, Olav; Miller, David A. B. Experimentally Realized In Situ Backpropagation for Deep Learning in Photonic Neural Networks. Science, 2023, 380(6643): 398-403.
  • [26] Embrechts, Mark J.; Hargis, Blake J.; Linton, Jonathan D. Augmented Efficient BackProp for Backpropagation Learning in Deep Autoassociative Neural Networks. 2010 International Joint Conference on Neural Networks (IJCNN 2010), 2010.
  • [27] Yang, Jie; Yang, Wenyu; Wu, Wei. A Remark on the Error-Backpropagation Learning Algorithm for Spiking Neural Networks. Applied Mathematics Letters, 2012, 25(8): 1118-1120.
  • [28] Lin, Fu-Ren; Shaw, Michael J. Active Training of Backpropagation Neural Networks Using the Learning by Experimentation Methodology. Annals of Operations Research, 1997, 75: 105-122.
  • [29] Guo, Wei; Huang, He; Huang, Tingwen. Complex-Valued Feedforward Neural Networks Learning Without Backpropagation. Neural Information Processing (ICONIP 2017), Pt IV, 2017, 10637: 100-107.
  • [30] Oohori, Takahumi; Naganuma, Hidenori; Watanabe, Kazuhisa. A New Backpropagation Learning Algorithm for Layered Neural Networks with Nondifferentiable Units. Neural Computation, 2007, 19(5): 1422-1435.