CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks

Cited by: 2
Authors
Saha, Saunak [1 ]
Duwe, Henry [1 ]
Zambreno, Joseph [1 ]
Affiliations
[1] Iowa State Univ, Dept Elect & Comp Engn, Ames, IA 50011 USA
Funding
U.S. National Science Foundation
Keywords
Neuromorphic; Spiking neural networks; Reconfigurable; Accelerator; Memory; Caching; Leakage; Energy efficiency; PROCESSOR; MODEL; ARCHITECTURE;
DOI
10.1007/s11265-020-01546-x
CLC Classification
TP [automation and computer technology]
Discipline Code
0812
Abstract
While neural network models keep scaling in depth and computational requirements, biologically accurate models are becoming increasingly attractive for low-cost inference. Coupled with the need to bring more computation to resource-constrained embedded and IoT devices at the edge, specialized ultra-low-power accelerators for spiking neural networks are being developed. Given the large variance in the models these networks employ, such accelerators need to be flexible, user-configurable, performant, and energy-efficient. In this paper, we describe CyNAPSE, a fully digital accelerator designed to emulate the neural dynamics of diverse spiking networks. Since the primary use case of our implementation is energy efficiency, we take a closer look at the factors that drive its energy consumption. We observe that while the majority of its dynamic power consumption can be attributed to memory traffic, its on-chip components suffer greatly from static leakage. Given that the event-driven spike-processing algorithm is naturally memory-intensive and leaves a large number of processing elements idle, tackling each of these problems leads toward a more efficient hardware implementation. Using a diverse set of network benchmarks, we conduct a detailed study of memory-access patterns that ultimately informs our choice of an application-specific, network-adaptive memory-management strategy to reduce the chip's dynamic power consumption. We then propose and evaluate a leakage-mitigation strategy for runtime control of idle power. Using both the RTL implementation and a software simulation of CyNAPSE, we measure the relative benefits of these techniques. Results show that our adaptive memory-management policy reduces dynamic power consumption by up to 22% more than conventional policies, and that the runtime leakage-mitigation techniques achieve between 14% and 99.92% savings in leakage energy consumption across CyNAPSE hardware modules.
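To make the abstract's central claim concrete, the sketch below illustrates why event-driven spike processing is memory-intensive. It is not the authors' implementation: the network size, LIF constants, unit axon delay, and helper names (fetch_row, on_spike) are all illustrative assumptions. Each spike event forces a fetch of one synaptic weight row, and a small LRU cache stands in for on-chip weight storage, so the cache's hit/miss counts act as a rough proxy for the dynamic memory traffic that a network-adaptive management policy targets.

```python
# Minimal sketch -- NOT the CyNAPSE design. Event-driven leaky
# integrate-and-fire (LIF) processing: every spike event fetches one
# weight row, with a tiny LRU cache standing in for on-chip SRAM.
import heapq
from functools import lru_cache

import numpy as np

N = 64                                # assumed network size
V_TH, V_RESET, TAU = 1.0, 0.0, 20.0   # assumed threshold, reset, time constant
MAX_EVENTS = 20_000                   # safety cap for the toy run

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, (N, N))  # dense synaptic matrix
v = np.zeros(N)                       # membrane potentials
last_t = np.zeros(N)                  # time of each neuron's last update


@lru_cache(maxsize=8)                 # small cache: stand-in for on-chip storage
def fetch_row(src: int) -> np.ndarray:
    return weights[src]               # a miss models an off-chip weight fetch


def on_spike(t: float, src: int, events: list) -> None:
    """Handle one spike: fetch the source's weight row, leak all
    targets for the elapsed time, integrate, and fire."""
    global v, last_t
    row = fetch_row(src)              # per-event weight-row traffic
    v = v * np.exp(-(t - last_t) / TAU) + row   # leak, then integrate
    last_t = np.full(N, t)
    for dst in np.nonzero(v >= V_TH)[0]:
        v[dst] = V_RESET
        heapq.heappush(events, (t + 1.0, int(dst)))  # assumed unit delay


events = [(0.0, int(i)) for i in rng.integers(0, N, size=5)]  # seed spikes
heapq.heapify(events)
processed = 0
while events and processed < MAX_EVENTS:
    t, src = heapq.heappop(events)
    on_spike(t, src, events)
    processed += 1

print(fetch_row.cache_info())  # hits vs. misses ~ on-chip vs. off-chip traffic
```

Changing the cache size or eviction policy shifts the hit/miss ratio, which is precisely the knob an application-specific, network-adaptive replacement policy would tune per workload. The leakage side of the paper has no simple software analogue here, since it concerns runtime gating of idle hardware modules.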
Pages: 907-929
Page count: 23
Related Papers
50 records in total
  • [1] CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks
    Saunak Saha
    Henry Duwe
    Joseph Zambreno
Journal of Signal Processing Systems, 2020, 92: 907-929
  • [2] Bayesian Inference Accelerator for Spiking Neural Networks
    Katti, Prabodh
    Nimbekar, Anagha
    Li, Chen
    Acharyya, Amit
    Al-Hashimi, Bashir M.
    Rajendran, Bipin
2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2024
  • [3] Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
    Diehl, Peter U.
    Zarrella, Guido
    Cassidy, Andrew
    Pedroni, Bruno U.
    Neftci, Emre
2016 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2016
  • [4] Effective Post-Training Quantization Of Neural Networks For Inference on Low Power Neural Accelerator
    Demidovskij, Alexander
    Smirnov, Eugene
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [5] FEDERATED NEUROMORPHIC LEARNING OF SPIKING NEURAL NETWORKS FOR LOW-POWER EDGE INTELLIGENCE
    Skatchkovsky, Nicolas
    Fang, Hyeryung
    Simeone, Osvaldo
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 8524-8528
  • [6] Low-Power Real-Time Sequential Processing with Spiking Neural Networks
    Liyanagedera, Chamika Mihiranga
    Nagaraj, Manish
    Ponghiran, Wachirawit
    Roy, Kaushik
2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2023
  • [7] An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification
    Ju, Xiping
    Fang, Biao
    Yan, Rui
    Xu, Xiaoliang
    Tang, Huajin
NEURAL COMPUTATION, 2020, 32(1): 182-204
  • [8] Reconfigurable Computation in Spiking Neural Networks
    Neves, Fabio Schittler
    Timme, Marc
IEEE ACCESS, 2020, 8: 179648-179655
  • [9] VSA: Reconfigurable Vectorwise Spiking Neural Network Accelerator
    Lien, Hong-Han
    Hsu, Chung-Wei
    Chang, Tian-Sheuan
2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021
  • [10] A Low-Power Analog Cell for Implementing Spiking Neural Networks in 65 nm CMOS
    Venker, John S.
    Vincent, Luke
    Dix, Jeff
JOURNAL OF LOW POWER ELECTRONICS AND APPLICATIONS, 2023, 13(4)