CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks

Cited by: 2
Authors
Saha, Saunak [1 ]
Duwe, Henry [1 ]
Zambreno, Joseph [1 ]
Affiliations
[1] Iowa State Univ, Dept Elect & Comp Engn, Ames, IA 50011 USA
Funding
National Science Foundation;
Keywords
Neuromorphic; Spiking neural networks; Reconfigurable; Accelerator; Memory; Caching; Leakage; Energy efficiency; PROCESSOR; MODEL; ARCHITECTURE;
DOI
10.1007/s11265-020-01546-x
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
While neural network models keep scaling in depth and computational requirements, biologically accurate models are becoming increasingly attractive for low-cost inference. This trend, coupled with the need to bring more computation to the edge in resource-constrained embedded and IoT devices, has motivated specialized ultra-low-power accelerators for spiking neural networks. Because the models employed in these networks vary widely, such accelerators need to be flexible, user-configurable, performant, and energy-efficient. In this paper, we describe CyNAPSE, a fully digital accelerator designed to emulate the neural dynamics of diverse spiking networks. Since our implementation is primarily concerned with energy efficiency, we take a closer look at the factors that drive its energy consumption. We observe that while the majority of its dynamic power consumption can be credited to memory traffic, its on-chip components suffer greatly from static leakage. Given that the event-driven spike-processing algorithm is inherently memory-intensive and leaves a large number of processing elements idle, it makes sense to tackle each of these problems on the way to a more efficient hardware implementation. Using a diverse set of network benchmarks, we carry out a detailed study of memory access patterns that ultimately informs our choice of an application-specific, network-adaptive memory-management strategy to reduce the chip's dynamic power consumption. Subsequently, we propose and evaluate a leakage-mitigation strategy for runtime control of idle power. Using both the RTL implementation and a software simulation of CyNAPSE, we measure the relative benefits of these undertakings. Results show that our adaptive memory-management policy yields up to 22% more reduction in dynamic power consumption than conventional policies. The runtime leakage-mitigation techniques achieve savings in leakage energy consumption of up to 99.92%, and at least 14%, across CyNAPSE hardware modules.
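The abstract attributes most dynamic power to memory traffic from event-driven spike processing. The minimal Python sketch below illustrates why: under an assumed leaky integrate-and-fire (LIF) model (the abstract does not specify CyNAPSE's exact neuron dynamics), every spike event forces a fetch of a full row of synaptic weights in addition to the neuron-state update. All sizes, constants, and names here are illustrative assumptions, not details from the paper.

```python
# Minimal event-driven LIF spike-processing sketch (illustrative only).
# Every spike event fetches one full row of the synaptic weight matrix,
# which is the dominant memory traffic the abstract refers to.
import numpy as np
from collections import deque

N = 256                       # number of neurons (assumed)
V_THRESH, V_RESET = 1.0, 0.0  # firing threshold and reset potential
LEAK = 0.95                   # multiplicative leak applied per processed event
MAX_EVENTS = 200              # hard cap so this sketch always terminates

rng = np.random.default_rng(0)
weights = rng.uniform(-0.1, 0.2, size=(N, N))  # synaptic weight matrix
voltage = np.zeros(N)                          # membrane potentials
spike_queue = deque(rng.integers(0, N, size=16).tolist())  # seed spike events

mem_words_read = 0
events = 0
while spike_queue and events < MAX_EVENTS:
    pre = spike_queue.popleft()
    events += 1
    row = weights[pre]              # one full weight-row fetch per spike event
    mem_words_read += N             # this read traffic dominates dynamic power
    voltage = voltage * LEAK + row  # simplified: leak applied per event
    fired = np.flatnonzero(voltage >= V_THRESH)
    voltage[fired] = V_RESET
    for post in fired[:4]:          # enqueue a few output spikes as new events
        if len(spike_queue) < 64:
            spike_queue.append(int(post))

print(f"events processed: {events}, weight words read: {mem_words_read}")
```

Similarly, the runtime leakage-mitigation idea can be caricatured as power-gating a module that sits idle past a threshold, trading a wake-up energy penalty against the leakage saved while gated. The abstract gives no mechanism or figures, so the policy and parameters below are purely hypothetical.

```python
# Toy runtime leakage-mitigation model: power-gate a module after it has been
# idle for IDLE_THRESHOLD cycles, paying WAKE_PENALTY to bring it back.
# All parameters and the policy itself are hypothetical illustrations.
IDLE_THRESHOLD = 3     # idle cycles tolerated before gating (assumed)
LEAK_PER_CYCLE = 1.0   # leakage energy of an ungated idle module (assumed unit)
WAKE_PENALTY = 0.5     # energy cost to restore a gated module (assumed)

def leakage_energy(activity, gate=True):
    """Accumulate idle-leakage energy for one module over a 0/1 activity trace."""
    energy, idle, gated = 0.0, 0, False
    for active in activity:
        if active:
            if gated:
                energy += WAKE_PENALTY  # wake the module before use
                gated = False
            idle = 0
        else:
            idle += 1
            if gate and idle >= IDLE_THRESHOLD:
                gated = True            # a gated module leaks ~nothing
            if not gated:
                energy += LEAK_PER_CYCLE
    return energy

trace = [1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
print("ungated idle leakage:", leakage_energy(trace, gate=False))  # 10.0
print("gated idle leakage:  ", leakage_energy(trace, gate=True))   # 5.0
```

In a toy trace like this one, gating recovers most of the idle-cycle leakage while paying only for occasional wake-ups, which is the trade-off the paper's runtime techniques navigate.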
Pages: 907-929
Number of pages: 23