Efficient learning in spiking neural networks

Cited by: 0
Authors
Rast, Alexander [1 ]
Aoun, Mario Antoine
Elia, Eleni G. [1 ]
Crook, Nigel [1 ]
Affiliations
[1] Oxford Brookes Univ, Sch Engn Comp & Math, Wheatley Campus, Oxford OX33 1HX, England
Keywords
Spiking; Adaptation; Energy efficiency; Plasticity; Neurodynamics; FIRE MODEL; POLYCHRONIZATION; CLASSIFICATION; COMPUTATION; ALGORITHM; NEURONS; SPIKES
DOI
10.1016/j.neucom.2024.127962
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) are a large class of neural model distinct from 'classical' continuous-valued networks such as multilayer perceptrons (MLPs). With event-driven dynamics and a continuous-time model, in contrast to the discrete-time model of their classical counterparts, they offer interesting advantages in representational capacity and energy consumption. Spiking networks may also be more biologically plausible, offering more insights into neuroscience. However, developing models of learning for SNNs has historically proven challenging: as discontinuous systems, their dynamics are much more complex, and they cannot benefit from the strong theoretical developments in MLPs such as convergence proofs and optimal gradient descent. Nor do they gain automatically from algorithmic improvements that have produced efficient matrix inversion and batch training methods. Most of the existing research has focused on the most well-studied learning mechanism in SNNs, spike-timing-dependent plasticity (STDP), and although there has been progress, there are also notable pathologies that have often been solved with a variety of ad-hoc techniques. While efforts have been made to map SNNs to classical convolutional neural networks (CNNs), these have not yet shown any decisive efficiency advantage over conventional CNNs. More promising research directions lie in the realm of pure spiking learning models that exploit the inherent temporal dynamics (and often leverage recurrency). Metrics are needed; one possibility would be a measure of total energy cost per unit reduction in error. This tutorial overview looks at existing techniques for learning in SNNs and offers some thoughts for future directions.
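The pairwise STDP rule mentioned in the abstract can be sketched as follows. This is a minimal illustration using the common exponential-window form; the function name and the parameter values (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) are illustrative assumptions, not taken from the paper under review:

```python
import math

def stdp_update(delta_t, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP weight change for one pre/post spike pairing.

    delta_t = t_post - t_pre, in ms. A causal pairing (pre fires before
    post, delta_t >= 0) potentiates the synapse; an anti-causal pairing
    depresses it. Parameter values here are illustrative assumptions.
    """
    if delta_t >= 0:
        # Potentiation, decaying exponentially with the spike-time gap.
        return a_plus * math.exp(-delta_t / tau_plus)
    # Depression for post-before-pre pairings.
    return -a_minus * math.exp(delta_t / tau_minus)

print(stdp_update(5.0) > 0)   # True: causal pairing strengthens the synapse
print(stdp_update(-5.0) < 0)  # True: anti-causal pairing weakens it
```

The asymmetric window (slightly larger `a_minus` than `a_plus`) is one common way to keep weights from growing without bound, which relates to the "notable pathologies" and ad-hoc stabilisation techniques the abstract refers to.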
Pages: 12
Related papers
50 records
  • [21] Autonomous Learning Paradigm for Spiking Neural Networks
    Liu, Junxiu
    McDaid, Liam J.
    Harkin, Jim
    Karim, Shvan
    Johnson, Anju P.
    Halliday, David M.
    Tyrrell, Andy M.
    Timmis, Jon
    Millard, Alan G.
    Hilder, James
    Artificial Neural Networks and Machine Learning - ICANN 2019: Theoretical Neural Computation, Part I, 2019, 11727: 737-744
  • [22] Comparison of learning methods for spiking neural networks
    Kukin K.
    Sboev A.
    Optical Memory and Neural Networks (Information Optics), 2015, 24(2): 123-129
  • [23] Learning long sequences in spiking neural networks
    Stan, Matei-Ioan
    Rhodes, Oliver
    Scientific Reports, 2024, 14(1)
  • [24] A reinforcement learning algorithm for spiking neural networks
    Florian, RV
    Seventh International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Proceedings, 2005: 299-306
  • [25] Supervised Learning in Multilayer Spiking Neural Networks
    Sporea, Ioana
    Gruening, Andre
    Neural Computation, 2013, 25(2): 473-509
  • [26] Learning rules in spiking neural networks: A survey
    Yi, Zexiang
    Lian, Jing
    Liu, Qidong
    Zhu, Hegui
    Liang, Dong
    Liu, Jizhao
    Neurocomputing, 2023, 531: 163-179
  • [27] Learning in neural networks by reinforcement of irregular spiking
    Xie, XH
    Seung, HS
    Physical Review E, 2004, 69(4): 10
  • [28] Deep Residual Learning in Spiking Neural Networks
    Fang, Wei
    Yu, Zhaofei
    Chen, Yanqi
    Huang, Tiejun
    Masquelier, Timothee
    Tian, Yonghong
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34
  • [29] A Learning Framework for Controlling Spiking Neural Networks
    Narayanan, Vignesh
    Ritt, Jason T.
    Li, Jr-Shin
    Ching, ShiNung
    2019 American Control Conference (ACC), 2019: 211-216
  • [30] Efficient and hardware-friendly methods to implement competitive learning for spiking neural networks
    Qu, Lianhua
    Zhao, Zhenyu
    Wang, Lei
    Wang, Yong
    Neural Computing &amp; Applications, 2020, 32(17): 13479-13490