Advancing Spiking Neural Networks Toward Deep Residual Learning

Cited by: 1
Authors
Hu, Yifan [1 ,2 ]
Deng, Lei [1 ]
Wu, Yujie [3 ]
Yao, Man [2 ,4 ]
Li, Guoqi [2 ,4 ]
Affiliations
[1] Tsinghua Univ, Ctr Brain Inspired Comp Res, Dept Precis Instrument, Beijing 100084, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[3] Graz Univ Technol, Inst Theoret Comp Sci, A-8010 Graz, Austria
[4] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
Funding
US National Science Foundation;
Keywords
Degradation problem; neuromorphic computing; residual neural network; spiking neural network (SNN); INTELLIGENCE; MODEL;
DOI
10.1109/TNNLS.2024.3355393
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Despite the rapid progress of neuromorphic computing, the inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcuts have proven to be an important approach for training deep neural networks, but previous work has rarely assessed their applicability to the specifics of SNNs. In this article, we first identify that this neglect leads to impeded information flow and an accompanying degradation problem in a spiking version of vanilla ResNet. To address this issue, we propose a novel SNN-oriented residual architecture, termed MS-ResNet, which establishes membrane-based shortcut pathways, and we further prove, by introducing block dynamical isometry theory, that gradient norm equality can be achieved in MS-ResNet, which ensures that the network is well-behaved in a depth-insensitive way. We are thus able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any degradation problem. To validate the effectiveness of MS-ResNet, experiments are conducted on both frame-based and neuromorphic datasets. MS-ResNet104 achieves 76.02% accuracy on ImageNet, which is, to the best of our knowledge, the highest in the domain of directly trained SNNs. Excellent energy efficiency is also observed, with an average of only one spike per neuron needed to classify an input sample. We believe our powerful and scalable models will provide strong support for further exploration of SNNs.
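The key architectural idea in the abstract is where the shortcut connects: a vanilla spiking ResNet adds the shortcut and then applies the spiking nonlinearity, which binarizes the identity path, whereas a membrane-based shortcut adds the identity signal on the continuous membrane-level values after the branch's spiking activation. The following NumPy sketch illustrates this contrast on a single timestep; the function names, the linear stand-in for the Conv-BN stack, and the omission of surrogate gradients and temporal dynamics are all simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def lif_spike(membrane, threshold=1.0):
    """Heaviside spike from membrane potential (surrogate gradient omitted)."""
    return (membrane >= threshold).astype(np.float32)

def residual_branch(x, w):
    """Stand-in for the block's Conv-BN stack: a simple linear transform."""
    return x @ w

def vanilla_spiking_block(x, w, threshold=1.0):
    """Vanilla spiking ResNet ordering: add the shortcut, then spike.
    The nonlinearity binarizes the sum, so the identity path cannot pass
    information through unchanged (the impeded information flow)."""
    return lif_spike(residual_branch(x, w) + x, threshold)

def ms_block(x, w, threshold=1.0):
    """Membrane-shortcut ordering: spike inside the branch, then add the
    shortcut on the continuous membrane-level signal, keeping identity intact."""
    spikes = lif_spike(x, threshold)       # activation before the branch
    return residual_branch(spikes, w) + x  # membrane-based shortcut

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)).astype(np.float32)
w = np.zeros((8, 8), dtype=np.float32)     # zeroed branch: pure identity test

# With a zeroed residual branch, the membrane-shortcut block reduces exactly
# to the identity, while the vanilla block still binarizes its input.
assert np.allclose(ms_block(x, w), x)
assert set(np.unique(vanilla_spiking_block(x, w))) <= {0.0, 1.0}
```

The zeroed-branch check makes the degradation argument concrete: a deep stack of such vanilla blocks cannot even represent the identity mapping, while the membrane-shortcut variant can, which is why depth scales without degradation.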
Pages: 1-15
Page count: 15