Enforcing balance allows local supervised learning in spiking recurrent networks

Cited by: 0
Authors
Bourdoukan, Ralph [1]
Deneve, Sophie [1]
Affiliations
[1] ENS Paris, Grp Neural Theory, Rue Ulm 29, Paris, France
Keywords
NEURON
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation, are not local and thus not biologically plausible. Furthermore, reproducing the Poisson-like statistics of neural responses requires the use of networks with balanced excitation and inhibition. Such balance is easily destroyed during learning. Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input. The network uses two types of recurrent connections: fast and slow. The fast connections learn to balance excitation and inhibition using a voltage-based plasticity rule. The slow connections are trained to minimize the error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by the fast connections is crucial to ensure that global error signals are available locally in each neuron, in turn resulting in a local learning rule for the slow connections. This demonstrates that spiking networks can learn complex dynamics using purely local learning rules, with E/I balance serving as the key ingredient rather than as an additional constraint. The resulting network implements a given function within the predictive coding scheme, with minimal dimensions and activity.
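The abstract describes a concrete information flow: a teacher linear system, its decoding error fed back to the network as ordinary feed-forward input, fast recurrent connections that enforce E/I balance, and slow recurrent connections updated by a local, current-based Hebbian rule. The toy sketch below illustrates that flow under simplifying assumptions: the fast connections are fixed at a balanced value rather than learned (in the paper they are themselves acquired via a voltage-based rule), at most one neuron spikes per time step, and all names, constants, and the exact form of the slow-weight update are illustrative choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher linear dynamical system: x_dot = A @ x + c(t) (a damped rotation).
A = np.array([[-0.5, -2.0],
              [ 2.0, -0.5]])
K, N = 2, 40                    # latent dimensions, number of LIF neurons
dt, lam = 1e-3, 10.0            # time step (s), leak / decoder rate (1/s)

Gamma = 0.1 * rng.standard_normal((K, N))  # decoding weights: x_hat = Gamma @ r
F = Gamma.T                                # feed-forward weights
Omega_f = -F @ Gamma            # fast connections, fixed at a balanced value
                                # (learned with a voltage-based rule in the paper)
Omega_s = np.zeros((N, N))      # slow connections, learned online
thresh = 0.5 * np.sum(F ** 2, axis=1)      # firing thresholds

eta, gain = 0.02, 5.0           # slow-rule learning rate, error-feedback gain
V, r, x = np.zeros(N), np.zeros(N), np.zeros(K)

for step in range(200_000):
    t = step * dt
    c = np.array([np.sin(2.0 * t), np.cos(3.0 * t)])   # command input
    x += dt * (A @ x + c)                              # teacher trajectory
    e = x - Gamma @ r                                  # decoding error

    # The error enters as an ordinary feed-forward current, gain * e.
    I_in = F @ (c + gain * e)
    V += dt * (-lam * V + I_in + Omega_s @ r)

    k = int(np.argmax(V - thresh))  # greedy: at most one spike per step
    if V[k] > thresh[k]:
        V += Omega_f[:, k]          # fast recurrence: instant balance + reset
        # Local, current-based Hebbian update on the spiker's outgoing column:
        # postsynaptic error-feedback current times the presynaptic spike.
        Omega_s[:, k] += eta * (F @ e)
        r[k] += 1.0
    r += -lam * r * dt              # filtered spike trains

    if step % 40_000 == 0:
        print(f"t = {t:6.1f} s  |decoding error| = {np.linalg.norm(e):.3f}")
```

If learning succeeds in this simplified setting, the printed decoding error shrinks as Omega_s approaches weights that implement the teacher dynamics internally, at which point the error-feedback gain could in principle be reduced; the sketch is untuned and meant only to make the abstract's architecture concrete.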
Pages: 9