Supervised Learning Strategy for Spiking Neurons Based on Their Segmental Running Characteristics

Cited by: 0
Authors
Gu, Xingjian [1 ]
Shu, Xin [1 ]
Yang, Jing [2 ]
Xu, Yan [1 ]
Jiang, Haiyan [1 ]
Shu, Xiangbo [3 ]
Affiliations
[1] Nanjing Agr Univ, Coll Artificial Intelligence, Nanjing 210095, Jiangsu, Peoples R China
[2] Beijing Normal Univ, Sch Management, Zhuhai Campus, Zhuhai 519087, Guangdong, Peoples R China
[3] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spiking neurons; Spike sequence learning; Fully supervised learning; Running time segments; NEURAL-NETWORKS; GRADIENT DESCENT; ALGORITHM; INFORMATION; PRECISION; RESUME;
DOI
10.1007/s11063-023-11348-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Supervised learning of spiking neurons is an effective simulation method for exploring the learning mechanisms of real neurons. Desired output spike trains are typically used as supervisory signals that control the adjustment of synaptic strengths so that the neuron emits spikes precisely. The goal of supervised learning is also to drive spiking neurons into the desired running and firing state. Although the running process of a spiking neuron is continuous, the absolute refractory periods divide it into several running segments. Based on this segmental characteristic, a new supervised learning strategy for spiking neurons is proposed that expands the role of supervisory signals in learning. Desired output spikes are used to actively regulate the running segments, making them more efficient at reaching the desired running and firing state. The supervisory signals thus actively regulate the running process of the neuron and participate in learning more comprehensively than by merely adjusting synaptic weights. Building on two weight adjustment mechanisms of spiking neurons, two specific supervised learning methods are proposed. Experimental results obtained under various settings show that the two methods achieve higher learning performance, demonstrating the effectiveness of the new strategy.
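The segmental view described in the abstract can be illustrated with a minimal leaky integrate-and-fire (LIF) sketch: each output spike, followed by an absolute refractory period, closes one running segment and opens the next. The LIF model, the parameter values, and the function name `lif_segments` below are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def lif_segments(input_current, threshold=1.0, tau=20.0, t_ref=5.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron and split its run into
    segments delimited by spikes and the absolute refractory periods that
    follow them. Returns (spike_times, segments)."""
    v = 0.0                      # membrane potential
    refractory = 0               # remaining refractory steps
    spikes, segments = [], []
    seg_start = 0                # first running segment starts at t = 0
    for t, i_t in enumerate(input_current):
        if refractory > 0:       # neuron is silent: no integration here
            refractory -= 1
            continue
        v += dt * (-v / tau + i_t)          # leaky integration step
        if v >= threshold:                  # a spike closes the segment
            spikes.append(t)
            segments.append((seg_start, t))
            v = 0.0                         # reset membrane potential
            refractory = int(t_ref / dt)    # absolute refractory period
            seg_start = t + refractory + 1  # next segment starts afterwards
    segments.append((seg_start, len(input_current) - 1))  # trailing segment
    return spikes, segments
```

Under a constant input of 0.2, this neuron fires periodically and the run decomposes into segments such as (0, 5), (11, 16), and so on; each such window is the unit a segment-wise supervisory signal would regulate.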
Pages: 10747-10772
Number of pages: 26