Noise-Boosted Backpropagation Learning of Feedforward Threshold Neural Networks for Function Approximation

Cited by: 9
Authors
Duan, Lingling [1 ,2 ]
Duan, Fabing [1 ]
Chapeau-Blondeau, Francois [3 ]
Abbott, Derek [4 ]
Affiliations
[1] Qingdao Univ, Inst Complex Sci, Qingdao 266071, Peoples R China
[2] Jining Univ, Dept Math, Jining 273155, Peoples R China
[3] Univ Angers, Lab Angevin Rech Ingn Syst LARIS, F-49000 Angers, France
[4] Univ Adelaide, Sch Elect & Elect Engn, Ctr Biomed Engn CBME, Adelaide, SA 5005, Australia
Funding
Australian Research Council
Keywords
Function approximation; noise injection; noise-boosted backpropagation; optimal noise; stochastic resonance; threshold neural network; STOCHASTIC RESONANCE; INJECTION;
DOI
10.1109/TIM.2021.3121502
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
To ensure the feasibility of backpropagation training of feedforward threshold neural networks, each hidden layer is designed as a sufficiently large number of hard-limiting activation functions excited simultaneously by mutually independent external noise components and the weighted inputs. Injecting noise into the nondifferentiable activation functions enables a proper definition of the gradients, and the injected noise level is treated as a network parameter that can be adaptively updated by a stochastic gradient descent learning rule. This noise-boosted backpropagation learning process is found to converge to a nonzero optimized noise level, indicating that the injected noise is beneficial both for learning and for the ensuing retrieval phase. For minimizing the total error energy of the function approximation in the designed threshold neural network, the proposed noise-boosted backpropagation method is shown to outperform direct injection of noise into the network inputs or weight coefficients. The Lipschitz continuity of the noise-smoothed activation function in the hidden layer is demonstrated to guarantee local convergence of the learning process. Beyond Gaussian injected noise, the optimal noise type for training the designed threshold neural network is also solved numerically. Test experiments on approximating nonlinear functions and real-world datasets verify the feasibility of the noise-boosted backpropagation algorithm in the threshold neural network. These results not only extend the analysis of beneficial noise effects, akin to stochastic resonance and exploited here, to the universal approximation capabilities of threshold neural networks, but also allow backpropagation training of neural networks with a much wider family of nondifferentiable activation functions.
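To make the smoothing mechanism concrete: for a hard-limiting unit u(x) = 1[x >= 0] and an independent injected noise eta with cumulative distribution F_eta, the noise-averaged response is E[u(x + eta)] = 1 - F_eta(-x), which is differentiable in x whenever the noise density exists; for zero-mean Gaussian noise of scale sigma this expectation is the Gaussian CDF Phi(x/sigma), with derivative given by the Gaussian density. The NumPy sketch below trains a one-hidden-layer network of such noise-smoothed threshold units by gradient descent, updating the noise level sigma like any other parameter. It replaces the ensemble of many independently noisy hard-limiters by its expectation, and its network size, learning rates, and sine target are illustrative assumptions rather than the authors' configuration.

import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def Phi(z):
    # Standard normal CDF: the expected output E[step(x + noise)] of a
    # hard-limiting unit under zero-mean, unit-scale Gaussian noise.
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def pdf(z):
    # Standard normal density: derivative of Phi, giving a usable gradient.
    return np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)

# Toy 1-D regression target (an illustrative assumption, not the paper's benchmark).
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X[:, 0])
n = len(y)

H = 32                                   # number of hidden threshold units
W = rng.normal(0.0, 1.0, (1, H))
b = rng.normal(0.0, 1.0, H)
v = rng.normal(0.0, 0.1, H)
c = 0.0
sigma = 1.0                              # injected-noise level, trained like any other weight
lr, lr_sigma = 0.1, 0.01

for epoch in range(3000):
    pre = X @ W + b                      # (n, H) pre-activations
    z = pre / sigma
    A = Phi(z)                           # noise-smoothed activation of each hidden unit
    yhat = A @ v + c
    err = yhat - y

    # Backpropagation through the noise-smoothed activation (MSE loss, 1/n scaling).
    dA = err[:, None] * v[None, :]                          # dL/dA
    dpre = dA * pdf(z) / sigma                              # d Phi(pre/sigma) / d pre
    g_sigma = np.sum(dA * pdf(z) * (-pre / sigma**2)) / n   # d Phi(pre/sigma) / d sigma

    v -= lr * (A.T @ err) / n
    c -= lr * err.mean()
    W -= lr * (X.T @ dpre) / n
    b -= lr * dpre.mean(axis=0)
    sigma = max(sigma - lr_sigma * g_sigma, 1e-3)           # keep the noise level positive

print(f"train MSE = {np.mean(err**2):.4f}, learned noise level sigma = {sigma:.3f}")

Since this deterministic-expectation version omits the actual noise realizations used during retrieval, the sigma it learns need not match the paper's nonzero optimum; the sketch only illustrates how the injected-noise level enters the gradients.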
Pages: 12