Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks

Cited by: 603
Authors
Neftci, Emre O. [1 ]
Mostafa, Hesham [2 ]
Zenke, Friedemann [3 ]
Affiliations
[1] Univ Calif Irvine, Dept Cognit Sci & Comp Sci, Irvine, CA 92697 USA
[2] Intel, Artificial Intelligence Prod Grp, Off CTO, Santa Clara, CA USA
[3] Friedrich Miescher Inst Biomed Res, Basel, Switzerland
Funding
Wellcome Trust (UK); Swiss National Science Foundation; US National Science Foundation
Keywords
Neural networks; Fault tolerance; Energy efficiency; Biological system modeling; Algorithm
DOI
10.1109/MSP.2019.2931595
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic & Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Spiking neural networks (SNNs) are nature's versatile solution to fault-tolerant, energy-efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking NN processors attempt to emulate biological NNs. These developments have created an imminent need for methods and tools that enable such systems to solve real-world signal processing problems. Like conventional NNs, SNNs can be trained on real, domain-specific data; however, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates, step by step, the problems typically encountered when training SNNs and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. It then surveys existing approaches and introduces surrogate gradient (SG) methods as a particularly flexible and efficient way to overcome these challenges.
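To make the abstract's central idea concrete, the following is a minimal sketch of SG training, assuming PyTorch. It is not code from the article: the forward pass emits binary spikes through a hard threshold, while the backward pass substitutes a smooth fast-sigmoid surrogate derivative (in the style of SuperSpike) so that backpropagation through time receives a usable error signal. All names and constants (SurrogateSpike, scale, beta, threshold) are illustrative assumptions.

    # Minimal surrogate gradient sketch, assuming PyTorch.
    # Not code from the article; all names and constants are illustrative.
    import torch

    class SurrogateSpike(torch.autograd.Function):
        # Forward: hard threshold (Heaviside) -> binary spikes, zero gradient a.e.
        # Backward: smooth fast-sigmoid surrogate replaces the ill-defined derivative.
        scale = 10.0  # steepness of the surrogate (assumed value)

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            sg = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
            return grad_output * sg

    spike_fn = SurrogateSpike.apply

    # Toy usage: one leaky integrate-and-fire (LIF) layer unrolled over T steps.
    T, batch, n_in, n_out = 100, 32, 50, 10
    beta, threshold = 0.95, 1.0                       # membrane decay and firing threshold (assumed)
    w = (0.1 * torch.randn(n_in, n_out)).requires_grad_()
    inputs = (torch.rand(T, batch, n_in) < 0.1).float()  # Poisson-like input spike trains

    v = torch.zeros(batch, n_out)
    spikes = []
    for t in range(T):
        v = beta * v + inputs[t] @ w                  # leaky integration of input current
        s = spike_fn(v - threshold)                   # spike where membrane crosses threshold
        v = v - s * threshold                         # soft reset: subtract threshold on spike
        spikes.append(s)

    loss = torch.stack(spikes).mean()                 # placeholder loss for demonstration
    loss.backward()                                   # gradients flow through the surrogate
    print(w.grad.abs().mean())                        # nonzero despite the binary spikes

The key design choice is that only the backward pass is modified: the forward dynamics remain exactly binary, while gradients are computed as if the threshold were a smooth function.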
Pages: 51-63 (13 pages)
Related Articles (50 in total; first 10 shown)
  • [1] Gradual Surrogate Gradient Learning in Deep Spiking Neural Networks
    Chen, Yi
    Zhang, Silin
    Ren, Shiyu
    Qu, Hong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8927 - 8931
  • [2] Meta-learning spiking neural networks with surrogate gradient descent
    Stewart, Kenneth M.
    Neftci, Emre O.
    [J]. NEUROMORPHIC COMPUTING AND ENGINEERING, 2022, 2 (04):
  • [3] Relaxation LIF: A gradient-based spiking neuron for direct training deep spiking neural networks
    Tang, Jianxiong
    Lai, Jian-Huang
    Zheng, Wei-Shi
    Yang, Lingxiao
    Xie, Xiaohua
    [J]. NEUROCOMPUTING, 2022, 501 : 499 - 513
  • [4] Differentiable hierarchical and surrogate gradient search for spiking neural networks
    Che, Kaiwei
    Leng, Luziwei
    Zhang, Kaixuan
    Zhang, Jianguo
    Meng, Max Q. -H.
    Cheng, Jie
    Guo, Qinghai
    Liao, Jiangxing
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [5] Surrogate gradient scaling for directly training spiking neural networks
    Chen, Tao
    Wang, Shu
    Gong, Yu
    Wang, Lidan
    Duan, Shukai
    [J]. APPLIED INTELLIGENCE, 2023, 53 (23) : 27966 - 27981
  • [6] Learnable Surrogate Gradient for Direct Training Spiking Neural Networks
    Lian, Shuang
    Shen, Jiangrong
    Liu, Qianhui
    Wang, Ziming
    Yan, Rui
    Tang, Huajin
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 3002 - 3010
  • [7] Gradient Descent for Spiking Neural Networks
    Huh, Dongsung
    Sejnowski, Terrence J.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [8] Gradient-based feature-attribution explainability methods for spiking neural networks
    Bitar, Ammar
    Rosales, Rafael
    Paulitsch, Michael
    [J]. FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [9] Surrogate Module Learning: Reduce the Gradient Error Accumulation in Training Spiking Neural Networks
    Deng, Shikuang
    Lin, Hao
    Li, Yuhang
    Gu, Shi
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202