Fast and robust learning in Spiking Feed-forward Neural Networks based on Intrinsic Plasticity mechanism

Cited by: 27
Authors
Zhang, Anguo [1 ,2 ]
Zhou, Hongjun [3 ]
Li, Xiumin [1 ]
Zhu, Wei [2 ]
Affiliations
[1] Chongqing Univ, Coll Automat, Chongqing 400044, Peoples R China
[2] Ruijie Networks Co Ltd, Res Inst Ruijie, Fuzhou 350002, Fujian, Peoples R China
[3] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Intrinsic Plasticity; Feed-forward Neural Network; Spiking neuron model; Fast and robust learning; Error-backpropagation; Normalization;
DOI
10.1016/j.neucom.2019.07.009
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, the computational performance of a Spiking Feed-forward Neural Network (SFNN) is investigated based on a brain-inspired Intrinsic Plasticity (IP) mechanism, a membrane-potential-adaptive tuning scheme that changes the intrinsic excitability of individual neurons. This learning rule can regulate neural activity at a relatively homeostatic level even when the external input to a neuron is extremely low or extremely high. The effectiveness of IP on the SFNN model is studied and evaluated on MNIST handwritten-digit classification. The network weights are first trained in a conventional artificial neural network by backpropagation, and the rate-based neurons are then converted into spiking neuron models with IP learning. Our results show that both over-activation and under-activation of neuronal responses, which commonly arise during neural network computation, can be effectively avoided. Without loss of accuracy, the computation speed of the SFNN with IP learning is substantially higher than that of the other models. Moreover, when input intensity and data noise are taken into account, both the learning speed and the accuracy of the model are greatly improved by applying IP learning. This biologically inspired SFNN model is simple and effective and may offer insights into the optimization of neural computation. (C) 2019 Published by Elsevier B.V.
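The abstract names the mechanism but not its equations; as a rough illustration, the sketch below shows one common way a homeostatic IP rule of this kind can be realized on a leaky integrate-and-fire neuron. The function name lif_ip, the threshold-adaptation update, and the target_rate and eta parameters are assumptions made for illustration, not the paper's published rule.

    import numpy as np

    def lif_ip(inputs, v_th=1.0, tau=20.0, dt=1.0,
               target_rate=0.05, eta=0.01):
        """Leaky integrate-and-fire neuron with a hypothetical homeostatic
        intrinsic plasticity (IP) step: the firing threshold v_th is nudged
        so that the neuron's firing rate tracks target_rate, damping both
        over-activation and under-activation."""
        v, spikes = 0.0, []
        for i_t in inputs:
            v += (dt / tau) * (-v + i_t)   # leaky membrane integration
            s = 1.0 if v >= v_th else 0.0  # emit a spike at threshold
            if s:
                v = 0.0                    # reset after a spike
            # IP: the threshold rises when the neuron fires too often and
            # falls when it is silent, keeping activity near a set point.
            v_th += eta * (s - target_rate)
            spikes.append(s)
        return np.array(spikes), v_th

    # Even an "extremely high" input settles near the target firing rate.
    rng = np.random.default_rng(0)
    spk, th = lif_ip(5.0 * rng.random(2000))
    print(f"firing rate {spk.mean():.3f}, adapted threshold {th:.2f}")

In the conversion pipeline the abstract describes, the weights learned by backpropagation in the rate-based network would be carried over unchanged, with a rule of this kind keeping each spiking unit in a workable operating range regardless of input intensity.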
Pages: 102-112
Page count: 11