High-performance deep spiking neural networks via at-most-two-spike exponential coding

Cited: 0
Authors
Chen, Yunhua [1]
Feng, Ren [1]
Xiong, Zhimin [1]
Xiao, Jinsheng [2]
Liu, Jian K. [3]
Affiliations
[1] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou, Guangdong, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan, Peoples R China
[3] Univ Birmingham, Sch Comp Sci, Birmingham, England
Keywords
Deep spiking neural networks; ANN-SNN conversion; Time-based coding
DOI
10.1016/j.neunet.2024.106346
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) provide necessary models and algorithms for neuromorphic computing. A popular way of building high-performance deep SNNs is to convert ANNs to SNNs, taking advantage of advanced and well-trained ANNs. Here we propose an ANN-to-SNN conversion methodology that uses a time-based coding scheme, named At-most-two-spike Exponential Coding (AEC), and a corresponding AEC spiking neuron model. AEC neurons employ quantization-compensating spikes to improve coding accuracy and capacity, with each neuron generating at most two spikes within the time window. Two exponential decay functions with tunable parameters are proposed to represent the dynamic encoding thresholds, based on which pixel intensities are encoded into spike times and spike times are decoded back into pixel intensities. The hyper-parameters of AEC neurons are fine-tuned using a loss function defined over SNN-decoded values and ANN activation values. In addition, we design two regularization terms on the number of spikes, making it possible to achieve the best trade-off among accuracy, latency, and power consumption. The experimental results show that, compared to other similar methods, the proposed scheme not only obtains deep SNNs with higher accuracy, but also offers significant advantages in energy efficiency and inference latency. More details can be found at https://github.com/RPDS2020/AEC.git.
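To make the coding scheme concrete, below is a minimal Python sketch of the encode/decode idea the abstract describes: a pixel intensity is turned into at most two spike times by two exponentially decaying thresholds, with the second (quantization-compensating) spike encoding the residual left by the first. The function names (encode_aec, decode_aec), the threshold form a·exp(-t/τ), and all parameter values are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import numpy as np

# Sketch of at-most-two-spike exponential coding (AEC).
# Assumed threshold shape: theta(t) = a * exp(-t / tau); the paper's
# two decay functions and their tuned parameters may differ.

def encode_aec(x, T=8, decays=((1.0, 3.0), (0.125, 3.0))):
    """Encode an intensity x in [0, 1] into at most two spikes.

    Returns a list of (spike_time, a, tau) tuples, one per decay pass.
    """
    spikes, residual = [], x
    for a, tau in decays:
        thresholds = a * np.exp(-np.arange(T) / tau)
        hit = np.nonzero(thresholds <= residual)[0]
        if hit.size:  # fire at the first step where the decaying threshold reaches the value
            t = int(hit[0])
            spikes.append((t, a, tau))
            residual -= thresholds[t]  # second pass compensates the quantization error
    return spikes

def decode_aec(spikes):
    """Decode spike times back into an approximate intensity."""
    return sum(a * np.exp(-t / tau) for t, a, tau in spikes)

if __name__ == "__main__":
    x = 0.42
    spikes = encode_aec(x)
    print(spikes, decode_aec(spikes))  # ~0.414 with two spikes vs ~0.368 with one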
Pages: 9
Related Papers
50 in total
  • [41] Time Series Forecasting via Derivative Spike Encoding and Bespoke Loss Functions for Spiking Neural Networks
    Manna, Davide Liberato
    Vicente-Sola, Alex
    Kirkland, Paul
    Bihl, Trevor Joseph
    Di Caterina, Gaetano
    COMPUTERS, 2024, 13 (08)
  • [42] Efficient and Compact Representations of Deep Neural Networks via Entropy Coding
    Marino, Giosue Cataldo
    Furia, Flavio
    Malchiodi, Dario
    Frasca, Marco
    IEEE ACCESS, 2023, 11: 106103-106125
  • [43] Spiking Neural Networks With Time-to-First-Spike Coding Using TFT-Type Synaptic Device Model
    Oh, Seongbin
    Lee, Soochang
    Woo, Sung Yun
    Kwon, Dongseok
    Im, Jiseong
    Hwang, Joon
    Bae, Jong-Ho
    Park, Byung-Gook
    Lee, Jong-Ho
    IEEE ACCESS, 2021, 9: 78098-78107
  • [44] First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures
    Liu, Siying
    Leung, Vincent C. H.
    Dragotti, Pier Luigi
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [45] Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
    Datta, Gourav
    Kundu, Souvik
    Beerel, Peter A.
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [46] High-Performance Scaphoid Fracture Recognition via Effectiveness Assessment of Artificial Neural Networks
    Tung, Yu-Cheng
    Su, Ja-Hwung
    Liao, Yi-Wen
    Chang, Ching-Di
    Cheng, Yu-Fan
    Chang, Wan-Ching
    Chen, Bo-Hong
    APPLIED SCIENCES-BASEL, 2021, 11 (18)
  • [47] High-Performance Spiking Neural Net Accelerators for Embedded Computer Vision Applications
    Kim, Jung Kuk
    Knag, Phil
    Chen, Thomas
    Liu, Chester
    Lee, Ching-En
    Zhang, Zhengya
    2017 IEEE SOI-3D-SUBTHRESHOLD MICROELECTRONICS TECHNOLOGY UNIFIED CONFERENCE (S3S), 2017
  • [48] An Improved STBP for Training High-Accuracy and Low-Spike-Count Spiking Neural Networks
    Tan, Pai-Yu
    Wu, Cheng-Wen
    Lu, Juin-Ming
    PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021: 575-580
  • [49] High-performance, deep neural networks with sub-microsecond latency on FPGAs for trigger applications
    Nottbeck, Noel
    Schmitt, Christian
    Buescher, Volker
    19TH INTERNATIONAL WORKSHOP ON ADVANCED COMPUTING AND ANALYSIS TECHNIQUES IN PHYSICS RESEARCH, 2020, 1525
  • [50] SWIRL: High-performance many-core CPU code generation for deep neural networks
    Venkat, Anand
    Rusira, Tharindu
    Barik, Raj
    Hall, Mary
    Truong, Leonard
    INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING APPLICATIONS, 2019, 33 (06): 1275-1289