Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning

Cited by: 17
Authors
Chowdhury, Sayeed Shafayet [1 ]
Rathi, Nitin [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
Funding
US National Science Foundation
Keywords
Spiking neural networks; Unit timestep; Energy efficiency; Temporal pruning; Reinforcement learning
DOI
10.1007/978-3-031-20083-0_42
Chinese Library Classification
TP18 (Artificial Intelligence Theory)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Spiking Neural Networks (SNNs) can be energy-efficient alternatives to commonly used deep neural networks (DNNs). However, computation over multiple timesteps increases latency and energy consumption and incurs memory-access overhead for membrane potentials. Hence, latency reduction is pivotal for obtaining SNNs with high energy efficiency. However, reducing latency can adversely affect accuracy. To optimize the accuracy-energy-latency trade-off, we propose a temporal pruning method that starts with an SNN of T timesteps and reduces T at every iteration of training, with threshold and leak as trainable parameters. This yields a continuum of SNNs from T timesteps all the way down to a unit timestep. Training SNNs directly with 1 timestep fails to converge due to layerwise spike vanishing and the difficulty of finding optimal thresholds. The proposed temporal pruning overcomes this by maintaining sufficient spiking activity, which enables suitable layerwise thresholds to be learned with backpropagation. Using the proposed algorithm, we achieve top-1 accuracies of 93.05%, 70.15% and 69.00% on CIFAR-10, CIFAR-100 and ImageNet, respectively, with VGG16 in just 1 timestep. Note that SNNs with leaky-integrate-and-fire (LIF) neurons behave as recurrent neural networks (RNNs), with the membrane potential retaining information about previous inputs. The proposed SNNs therefore also enable sequential tasks such as reinforcement learning on the Cartpole and Atari Pong environments using only 1 to 5 timesteps.
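The temporal pruning scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: a leaky-integrate-and-fire (LIF) layer simulated over T timesteps, plus a schedule that shrinks T by one per training iteration until a single timestep remains. All function names and parameter values are illustrative assumptions; in the actual method, threshold and leak are updated by backpropagation rather than held fixed.

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, leak=0.9):
    """Simulate a layer of LIF neurons over T timesteps.

    inputs: array of shape (T, n) -- per-timestep input currents.
    Returns the binary spike train, shape (T, n).
    """
    T, n = inputs.shape
    v = np.zeros(n)                       # membrane potential (carries state across timesteps, RNN-like)
    spikes = np.zeros((T, n))
    for t in range(T):
        v = leak * v + inputs[t]          # leaky integration of input current
        fired = v >= threshold            # spike where potential crosses the threshold
        spikes[t] = fired
        v = np.where(fired, 0.0, v)       # hard reset of neurons that fired
    return spikes

def temporal_pruning_schedule(T_init, iters):
    """Reduce the number of timesteps by one per training iteration,
    yielding a continuum of SNNs from T_init down to a unit timestep."""
    return [max(1, T_init - k) for k in range(iters)]

rng = np.random.default_rng(0)
for T in temporal_pruning_schedule(T_init=5, iters=5):
    x = rng.uniform(0.0, 1.0, size=(T, 4))
    s = lif_forward(x, threshold=1.0, leak=0.9)
    # (a real training loop would update threshold and leak via backprop here)
```

The schedule is the key device: because the network at T-1 timesteps is initialized from the converged network at T timesteps, spiking activity stays high enough at each step for backpropagation to keep finding usable layerwise thresholds, which direct 1-timestep training cannot do.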
Pages: 709-726
Page count: 18