Enforcing balance allows local supervised learning in spiking recurrent networks

Cited by: 0
Authors
Bourdoukan, Ralph [1 ]
Deneve, Sophie [1 ]
Affiliations
[1] ENS Paris, Grp Neural Theory, Rue Ulm 29, Paris, France
Keywords
NEURON;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation, are not local and thus not biologically plausible. Furthermore, reproducing the Poisson-like statistics of neural responses requires the use of networks with balanced excitation and inhibition. Such balance is easily destroyed during learning. Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input. The network uses two types of recurrent connections: fast and slow. The fast connections learn to balance excitation and inhibition using a voltage-based plasticity rule. The slow connections are trained to minimize the error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by fast connections is crucial to ensure that global error signals are available locally in each neuron, in turn resulting in a local learning rule for the slow connections. This demonstrates that spiking networks can learn complex dynamics using purely local learning rules, using E/I balance as the key rather than an additional constraint. The resulting network implements a given function within the predictive coding scheme, with minimal dimensions and activity.
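The two-timescale plasticity scheme described in the abstract can be illustrated with a minimal sketch. The Python code below is an illustration only, not the paper's exact formulation: the network sizes, the decoder D, the thresholds, the learning rates, and the locally available error current (approximated here as D^T x - V) are all assumptions made for the demo. It shows a leaky integrate-and-fire network in which fast recurrent weights are nudged toward E/I balance by a voltage-based rule at each spike, while slow recurrent weights are updated by a current-based Hebbian rule driven by an error signal.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (assumptions of this sketch, not the paper's).
N, K = 20, 2                 # N spiking neurons encoding a K-dim linear system
dt, lam = 1e-3, 10.0         # Euler step; shared leak rate of membrane and readout
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # target dynamics dx/dt = A x + c(t) (a rotation)

D = rng.normal(0.0, 0.1, (K, N))      # fixed random decoder; x_hat = D r
T = 0.5 * np.sum(D**2, axis=0)        # thresholds T_i = ||D_i||^2 / 2

# Fast weights start near the balancing solution -D^T D (plus noise) so the
# network is stable from the outset; the voltage-based rule refines the balance.
W_fast = -D.T @ D + 0.05 * rng.normal(0.0, 0.1, (N, N))
W_slow = rng.normal(0.0, 0.01, (N, N))   # slow weights learn the dynamics

eta_f, eta_s, mu = 0.01, 0.001, 0.02     # learning rates and rate-cost term (assumed)

V, r, x = np.zeros(N), np.zeros(N), np.zeros(K)

for step in range(20_000):
    c = np.array([np.sin(0.01 * step), 0.0])   # command input
    x += dt * (A @ x + c)                       # target trajectory

    # Membrane dynamics: leak + feed-forward command + slow recurrent current.
    V += dt * (-lam * V + D.T @ c + W_slow @ r)

    i = int(np.argmax(V - T))                   # at most one spike per step
    if V[i] > T[i]:
        V += W_fast[:, i]                       # fast recurrent (balancing) kick

        # Voltage-based rule: after a spike of neuron i, adjust the fast weights
        # so that post-spike voltages (plus a small cost on rates) shrink.
        W_fast[:, i] -= eta_f * (V + mu * r)

        # Current-based Hebbian rule: presynaptic spike times a locally
        # available error current. Here D^T x - V stands in for the fed-back
        # error (an assumption of this sketch).
        W_slow[:, i] += eta_s * (D.T @ x - V)

        r[i] += 1.0                             # filtered spike train (PSC)

    r += dt * (-lam * r)

x_hat = D @ r   # network estimate of the state after learning

Note that both updates use only quantities available at the synapse: the postsynaptic voltage or current and the presynaptic spike. As the abstract explains, it is the tight balance maintained by the fast weights that makes the global error available in these local voltages, which is what renders the slow learning rule local.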
Pages: 9
Related papers (50 in total)
  • [1] A Supervised Learning Rule for Recurrent Spiking Neural Networks with Weighted Spikes
    Shi, Guoyong
    Liang, Jungang
    Cui, Yong
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 522 - 527
  • [2] Supervised learning with spiking neural networks
    Xin, JG
    Embrechts, MJ
    IJCNN'01: INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2001, : 1772 - 1777
  • [3] A Supervised Multi-spike Learning Algorithm for Recurrent Spiking Neural Networks
    Lin, Xianghong
    Shi, Guoyong
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT I, 2018, 11139 : 222 - 234
  • [4] Learning recurrent dynamics in spiking networks
    Kim, Christopher M.
    Chow, Carson C.
    ELIFE, 2018, 7
  • [5] Supervised Learning in Multilayer Spiking Neural Networks
    Sporea, Ioana
    Gruening, Andre
    NEURAL COMPUTATION, 2013, 25 (02) : 473 - 509
  • [6] Local dendritic balance enables learning of efficient representations in networks of spiking neurons
    Mikulasch, Fabian A.
    Rudelt, Lucas
    Priesemann, Viola
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (50)
  • [7] Paired competing neurons improving STDP supervised local learning in Spiking Neural Networks
    Goupy, Gaspard
    Tirilly, Pierre
    Bilasco, Ioan Marius
    FRONTIERS IN NEUROSCIENCE, 2024, 18
  • [8] A supervised learning algorithm based on spike train inner products for recurrent spiking neural networks
    Lin, Xianghong
    Pi, Xiaomei
    Wang, Xiangwen
    INTERNATIONAL JOURNAL OF COMPUTING SCIENCE AND MATHEMATICS, 2023, 17 (04) : 309 - 319
  • [9] Supervised learning in spiking neural networks with FORCE training
    Nicola, Wilten
    Clopath, Claudia
    NATURE COMMUNICATIONS, 2017, 8
  • [10] Stochastic variational learning in recurrent spiking networks
    Rezende, Danilo Jimenez
    Gerstner, Wulfram
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2014, 8