SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks

Cited by: 0
Authors
Liu, Fangxin [1 ,2 ]
Zhao, Wenbo [1 ,2 ]
Chen, Yongbiao [1 ]
Wang, Zongwu [1 ]
Jiang, Li [1 ,2 ,3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Shanghai Qi Zhi Inst, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, AI Inst, Key Lab Artificial Intelligence, MoE, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
N/A
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Spiking Neural Networks (SNNs) have recently attracted enormous research interest since their event-driven and brain-inspired structure enables low-power computation. In image recognition tasks, the best results achieved by SNNs so far utilize ANN-SNN conversion methods that replace activation functions in artificial neural networks (ANNs) with integrate-and-fire neurons. Compared to source ANNs, converted SNNs usually suffer from accuracy loss and require a considerable number of time steps to achieve competitive accuracy. We find that the performance degradation of converted SNNs stems from the fact that the information capacity of spike trains in transferred networks is smaller than that of activation values in the source ANN, resulting in less information being passed during SNN inference. To better correlate ANN and SNN for better performance, we propose a conversion framework to mitigate the gap between the activation values of the source ANN and the generated spike trains of the target SNN. The conversion framework originates from exploring an identical relation in the conversion and exploits a temporal separation scheme and a novel neuron model to make the relation hold. We demonstrate almost lossless ANN-SNN conversion using SpikeConverter for a wide variety of networks on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. Our results also show that SpikeConverter achieves this accuracy across different network architectures and datasets using 32x-512x fewer inference time steps than state-of-the-art ANN-SNN conversion methods.
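As a rough illustration of the rate-coding principle that standard ANN-SNN conversion builds on (not SpikeConverter's own neuron model, which the abstract describes only at a high level), the sketch below simulates a single integrate-and-fire neuron with soft reset: over T time steps its firing rate approximates a ReLU activation clipped to the threshold, and the approximation error shrinks as T grows, which is why converted SNNs need many time steps. All names here are illustrative, not from the paper.

```python
def if_neuron_rate(x, T=128, v_th=1.0):
    """Drive an integrate-and-fire neuron with a constant input x for
    T time steps and return its firing rate (scaled by the threshold).

    With soft reset (subtracting v_th instead of zeroing the membrane
    potential), the returned rate approximates min(max(x, 0), v_th),
    i.e. a clipped ReLU, with quantization error on the order of 1/T.
    """
    v = 0.0       # membrane potential
    spikes = 0    # number of emitted spikes
    for _ in range(T):
        v += x                # integrate the input current
        if v >= v_th:         # fire when the threshold is crossed
            spikes += 1
            v -= v_th         # soft reset keeps the residual charge
    return spikes * v_th / T

# Firing rates for a few inputs; negative inputs never fire,
# and inputs above v_th saturate at one spike per time step.
rates = [if_neuron_rate(x, T=1000) for x in (-0.5, 0.3, 1.5)]
```

The 1/T quantization error of the spike train is one concrete form of the "information capacity" gap between real-valued ANN activations and spike trains that the abstract identifies.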
Pages: 1692 - 1701
Page count: 10