SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs

Cited by: 15
Authors
Panchapakesan, Sathish [1 ]
Fang, Zhenman [1 ]
Li, Jian [2 ]
Affiliations
[1] Simon Fraser Univ, 8888 Univ Dr, Burnaby, BC V5A 1S6, Canada
[2] Futurewei Technol Inc, 111 Speen St, Framingham, MA 01701 USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC); Canada Foundation for Innovation (CFI);
Keywords
Spiking neural network; deep learning; hardware acceleration; FPGA; synchronous execution; NEURONS; BACKPROPAGATION; RESUME;
DOI
10.1145/3514253
CLC classification
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
Compared to conventional artificial neural networks, spiking neural networks (SNNs) are more biologically plausible and require less computation due to the event-driven nature of spiking neurons. However, the default asynchronous execution of SNNs also poses great challenges to accelerating their performance on FPGAs. In this work, we present a novel synchronous approach for rate-encoding-based SNNs, which is more hardware-friendly than conventional asynchronous approaches. We first quantitatively evaluate and mathematically prove that the proposed synchronous approach and asynchronous implementation alternatives of rate-encoding-based SNNs achieve similar inference accuracy, and we highlight the computational performance advantage of SyncNN over an asynchronous approach. We also design and implement the SyncNN framework to accelerate SNNs on Xilinx ARM-FPGA SoCs in a synchronous fashion. To improve computation and memory access efficiency, we first quantize the network weights to 16-bit, 8-bit, and 4-bit fixed-point values using SNN-friendly quantization techniques. Moreover, to fully exploit the event-driven characteristics of SNNs, we encode only the activated neurons by recording their positions and corresponding numbers of spikes, instead of using the common binary encoding (i.e., 1 for a spike and 0 for no spike). For the encoded neurons, which have dynamic and irregular access patterns, we design parameterized compute engines to accelerate their performance on the FPGA, where we explore various parallelization strategies and memory access optimizations. Our experimental results on multiple Xilinx ARM-FPGA SoC boards demonstrate that SyncNN scales to multiple networks, such as LeNet, Network in Network, and VGG, on various datasets such as MNIST, SVHN, and CIFAR-10. SyncNN not only achieves competitive accuracy (99.6%) but also state-of-the-art performance (13,086 frames per second) on the MNIST dataset.
Finally, we compare the performance of SyncNN with conventional CNNs using Vitis AI and find that SyncNN achieves similar accuracy and better performance than Vitis AI for image classification with small networks.
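The sparse spike encoding described in the abstract, recording only which neurons fired and how many times, rather than a dense 0/1 value per neuron per timestep, can be illustrated with a minimal sketch. This is not the SyncNN source code; the function name and array layout are hypothetical, chosen only to show the (position, spike count) representation for rate-encoded spike trains:

```python
import numpy as np

def encode_spikes(binary_spike_trains):
    """Convert dense binary spike trains into a sparse
    (neuron position, spike count) list, keeping only neurons
    that fired at least once. Illustrative sketch only.

    binary_spike_trains: array of shape (num_neurons, num_timesteps),
    with entry 1 where a neuron spiked at that timestep, else 0.
    """
    # Total spikes per neuron over the encoding window (rate encoding).
    counts = binary_spike_trains.sum(axis=1)
    # Keep only activated neurons (nonzero spike count).
    positions = np.nonzero(counts)[0]
    return list(zip(positions.tolist(), counts[positions].tolist()))

# Example: 5 neurons over 4 timesteps; only neurons 1 and 3 spike.
trains = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 1],   # neuron 1 fires 3 times
    [0, 0, 0, 0],
    [0, 1, 0, 0],   # neuron 3 fires once
    [0, 0, 0, 0],
])
print(encode_spikes(trains))  # [(1, 3), (3, 1)]
```

For sparse activity, this representation lets a compute engine iterate only over fired neurons, which is the event-driven saving the paper exploits; the trade-off is the irregular memory access pattern that SyncNN's parameterized compute engines are designed to handle.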
Pages: 27
Related papers
50 records total
  • [1] SyncNN: Evaluating and Accelerating Spiking Neural Networks on FPGAs
    Panchapakesan, Sathish
    Fang, Zhenman
    Li, Jian
    [J]. 2021 31ST INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE LOGIC AND APPLICATIONS (FPL 2021), 2021, : 286 - 293
  • [2] Designing and Accelerating Spiking Neural Networks using OpenCL for FPGAs
    Podobas, Artur
    Matsuoka, Satoshi
    [J]. 2017 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY (ICFPT), 2017, : 255 - 258
  • [3] Accelerating Sparse Deep Neural Networks on FPGAs
    Huang, Sitao
    Pearson, Carl
    Nagi, Rakesh
    Xiong, Jinjun
    Chen, Deming
    Hwu, Wen-mei
    [J]. 2019 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2019,
  • [4] E3NE: An End-to-End Framework for Accelerating Spiking Neural Networks With Emerging Neural Encoding on FPGAs
    Gerlinghoff, Daniel
    Wang, Zhehui
    Gu, Xiaozhe
    Goh, Rick Siow Mong
    Luo, Tao
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11) : 3207 - 3219
  • [5] Accelerating Deep Neural Networks Using FPGAs and ZYNQ
    Lee, Han Sung
    Jeon, Jae Wook
    [J]. 2021 IEEE REGION 10 SYMPOSIUM (TENSYMP), 2021,
  • [6] SSF: Accelerating Training of Spiking Neural Networks with Stabilized Spiking Flow
    Wang, Jingtao
    Song, Zengjie
    Wang, Yuxi
    Xiao, Jun
    Yang, Yuran
    Mei, Shuqi
    Zhang, Zhaoxiang
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5959 - 5968
  • [7] TensorFlow to Cloud FPGAs: Tradeoffs for Accelerating Deep Neural Networks
    Hadjis, Stefan
    Olukotun, Kunle
    [J]. 2019 29TH INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE LOGIC AND APPLICATIONS (FPL), 2019, : 360 - 366
  • [8] Challenges for large-scale implementations of spiking neural networks on FPGAs
    Maguire, L. P.
    McGinnity, T. M.
    Glackin, B.
    Ghani, A.
    Belatreche, A.
    Harkin, J.
    [J]. NEUROCOMPUTING, 2007, 71 (1-3) : 13 - 29
  • [9] Efficiency analysis of artificial vs. Spiking Neural Networks on FPGAs
    Li, Zhuoer
    Lemaire, Edgar
    Abderrahmane, Nassim
    Bilavarn, Sebastien
    Miramond, Benoit
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 133
  • [10] Accelerating Spiking Neural Networks using Memristive Crossbar Arrays
    Bohnstingl, Thomas
    Pantazi, Angeliki
    Eleftheriou, Evangelos
    [J]. 2020 27TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS AND SYSTEMS (ICECS), 2020,