Approximate Computing for Spiking Neural Networks

Cited: 0
Authors
Sen, Sanchari [1 ]
Venkataramani, Swagath [1 ,2 ]
Raghunathan, Anand [1 ]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY USA
Funding
National Science Foundation (USA);
Keywords
Approximate Computing; Spiking Neural Networks; Approximate Neural Networks;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline classification code
0812;
Abstract
Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks and are expected to drive new classes of recognition, data-analytics, and computer-vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute- and data-intensive, requiring new approaches to improve their efficiency. Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of the neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified using various static and dynamic parameters, such as the average spiking rates and current potentials of neurons and the weights of synaptic connections. Such a neuron is placed into one of several approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of the neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it to a 45 nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB of memory. Across a suite of six image-recognition benchmarks, AxSNN achieves a 1.4-5.5x reduction in scalar operations for network evaluation, which translates to 1.2-3.62x and 1.26-3.9x improvements in hardware and software energy, respectively, with no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.
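The abstract describes the core mechanism of AxSNN: skipping spike-triggered potential updates for neurons placed in approximation modes, with a controller that periodically refreshes those modes from parameters such as average spiking rates, current potentials, and synaptic weights. The Python sketch below illustrates that control flow only in outline; the names (Neuron, choose_mode, propagate_spike), the mode thresholds, the controller interval, and the weight-cutoff heuristic are hypothetical assumptions for illustration and are not taken from the paper or from the SNNAP design.

# Minimal sketch of AxSNN-style selective skipping of spike-triggered updates.
# All names, thresholds, and heuristics below are illustrative assumptions.
import numpy as np

class Neuron:
    def __init__(self, n_inputs, threshold=1.0):
        self.weights = np.random.randn(n_inputs) * 0.1   # synaptic weights
        self.potential = 0.0                              # internal state (membrane potential)
        self.threshold = threshold
        self.spike_rate = 0.0                             # running average of output spikes
        self.mode = 0                                     # 0 = exact, higher = more approximate

def choose_mode(neuron, rate_lo=0.05, rate_hi=0.5):
    """Pick an approximation mode from static/dynamic parameters
    (average spiking rate and distance of the potential from threshold)."""
    margin = neuron.threshold - neuron.potential
    if neuron.spike_rate < rate_lo and margin > 0.5 * neuron.threshold:
        return 2          # aggressive: ignore all but the strongest inputs
    if neuron.spike_rate < rate_hi:
        return 1          # moderate approximation
    return 0              # exact evaluation

def propagate_spike(pre_idx, post_neurons, weight_cutoff=0.05):
    """Deliver one presynaptic spike; skip updates to approximated neurons
    whose connecting synapse is weak (the 'subset of inputs' idea)."""
    skipped = 0
    for post in post_neurons:
        w = post.weights[pre_idx]
        # In an approximate mode, weak synapses are ignored entirely,
        # saving both the potential update and the weight fetch.
        if post.mode > 0 and abs(w) < weight_cutoff * post.mode:
            skipped += 1
            continue
        post.potential += w
        if post.potential >= post.threshold:
            post.potential = 0.0                          # reset after firing
            post.spike_rate = 0.9 * post.spike_rate + 0.1
        else:
            post.spike_rate *= 0.9
    return skipped

# Periodic controller pass: refresh every neuron's approximation mode.
layer = [Neuron(n_inputs=64) for _ in range(128)]
for step in range(100):
    if step % 10 == 0:                                    # controller interval (assumed)
        for n in layer:
            n.mode = choose_mode(n)
    propagate_spike(pre_idx=np.random.randint(64), post_neurons=layer)

In this sketch the savings come from the early `continue`: a skipped update avoids both the potential accumulation and the synaptic-weight access, which is the compute- and memory-energy saving the abstract attributes to skipped spike-triggered updates.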
Pages: 193-198
Page count: 6