Efficient Spiking Neural Networks With Radix Encoding

Cited: 5
Authors
Wang, Zhehui [1]
Gu, Xiaozhe [2]
Goh, Rick Siow Mong [1]
Zhou, Joey Tianyi [1]
Luo, Tao [1]
Affiliations
[1] A*STAR, Institute of High Performance Computing, Singapore 138632, Singapore
[2] Chinese University of Hong Kong, Future Network of Intelligence Institute (FNii), Shenzhen 518172, China
Keywords
Encoding; energy efficient; short spike train; speedup; spiking neural network (SNN)
DOI: 10.1109/TNNLS.2022.3195918
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) have latency and energy-efficiency advantages over traditional artificial neural networks (ANNs) due to their event-driven computation and their replacement of energy-consuming weight multiplication with addition. However, achieving high accuracy usually requires long spike trains, often more than 1000 time steps. This offsets the computational efficiency of SNNs, because a longer spike train means more operations and higher latency. In this article, we propose a radix-encoded SNN with ultrashort spike trains: it needs fewer than six time steps to reach accuracy even higher than that of its traditional counterpart. We also develop a method to fit our radix encoding technique into the ANN-to-SNN conversion approach, so that radix-encoded SNNs can be trained efficiently on mature platforms and hardware. Experiments show that radix encoding achieves a 25x improvement in latency and a 1.7% improvement in accuracy over the state-of-the-art method using the VGG-16 network on the CIFAR-10 dataset.
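The abstract does not spell out the encoding itself, but the name suggests that each time step carries one digit of a radix representation of an activation, so that T steps resolve radix^T levels rather than the roughly T levels of a rate code. The minimal Python sketch below illustrates one plausible base-2 reading; the function names, the [0, 1) normalization, and the per-step significance of 2^-(t+1) are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def radix_encode(x, timesteps=6, radix=2):
        # Encode a normalized activation x in [0, 1) as a short spike train.
        # Time step t carries significance radix**-(t+1), so `timesteps` steps
        # quantize x with error below radix**-timesteps. This is an assumed
        # encoding for illustration, not the paper's published code.
        spikes = np.zeros(timesteps, dtype=np.int8)
        residual = x
        for t in range(timesteps):
            significance = radix ** -(t + 1)
            if residual >= significance:
                spikes[t] = 1
                residual -= significance
        return spikes

    def radix_decode(spikes, radix=2):
        # Reconstruct the encoded value by summing the significance of each spike.
        return sum(int(s) * radix ** -(t + 1) for t, s in enumerate(spikes))

    train = radix_encode(0.7)    # -> array([1, 0, 1, 1, 0, 0], dtype=int8)
    value = radix_decode(train)  # -> 0.6875, within 2**-6 of the input

Under this reading, six base-2 time steps already distinguish 2^6 = 64 activation levels, whereas a rate code needs on the order of 64 time steps for the same resolution, which matches the intuition behind the latency gains claimed above.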
Pages: 3689-3701
Page count: 13
Related papers (showing [31]-[40] of 50)
  • [31] An efficient automated parameter tuning framework for spiking neural networks
    Carlson, Kristofor D.
    Nageswaran, Jayram Moorkanikara
    Dutt, Nikil
    Krichmar, Jeffrey L.
    FRONTIERS IN NEUROSCIENCE, 2014, 8
  • [32] Efficient Processing of Spiking Neural Networks via Task Specialization
    Abu Lebdeh, Muath
    Yildirim, Kasim Sinan
    Brunelli, Davide
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024: 1-11
  • [33] AutoSNN: Towards Energy-Efficient Spiking Neural Networks
    Na, Byunggook
    Mok, Jisoo
    Park, Seongsik
    Lee, Dongjin
    Choe, Hyeokjun
    Yoon, Sungroh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [34] An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
    Xie, Xiurui
    Qu, Hong
    Liu, Guisong
    Zhang, Malu
    Kurths, Juergen
    PLOS ONE, 2016, 11 (04)
  • [35] SPIDEN: deep Spiking Neural Networks for efficient image denoising
    Castagnetti, Andrea
    Pegatoquet, Alain
    Miramond, Benoit
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [36] Hardware Efficient Weight-Binarized Spiking Neural Networks
    Tang, Chengcheng
    Han, Jie
    2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023
  • [37] Efficient Modelling of Spiking Neural networks on a Scalable Chip Multiprocessor
    Jin, Xin
    Furber, Steve B.
    Woods, John V.
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008: 2812-2819
  • [38] Efficient Deployment of Spiking Neural Networks on SpiNNaker Neuromorphic Platform
    Galanis, Ioannis
    Anagnostopoulos, Iraklis
    Nguyen, Chinh
    Bares, Guillermo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (06): 1937-1941
  • [39] Layered tile architecture for efficient hardware spiking neural networks
    Wan, Lei
    Liu, Junxiu
    Harkin, Jim
    McDaid, Liam
    Luo, Yuling
    MICROPROCESSORS AND MICROSYSTEMS, 2017, 53: 21-32
  • [40] Efficient asynchronous federated neuromorphic learning of spiking neural networks
    Wang, Yuan
    Duan, Shukai
    Chen, Feng
    NEUROCOMPUTING, 2023, 557