TRAINING DEEP SPIKING NEURAL NETWORKS FOR ENERGY-EFFICIENT NEUROMORPHIC COMPUTING

Cited by: 0
Authors
Srinivasan, Gopalakrishnan [1 ]
Lee, Chankyu [1 ]
Sengupta, Abhronil [2 ]
Panda, Priyadarshini [3 ]
Sarwar, Syed Shakib [1 ]
Roy, Kaushik [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Penn State Univ, University Pk, PA 16802 USA
[3] Yale Univ, New Haven, CT 06520 USA
Source
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) | 2020
Funding
U.S. National Science Foundation;
Keywords
SNN; Stochastic STDP; ANN-SNN conversion; Spike-based error backpropagation; Surrogate gradient backpropagation;
DOI
10.1109/icassp40776.2020.9053914
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Code
070206; 082403;
Abstract
Spiking Neural Networks (SNNs), widely known as the third generation of neural networks, encode input information temporally using sparse spiking events, which can be harnessed to achieve higher computational efficiency for cognitive tasks. However, compared with the rapid strides in accuracy achieved by state-of-the-art Analog Neural Networks (ANNs), SNN training algorithms are much less mature, leading to an accuracy gap between SNNs and ANNs. In this paper, we propose SNN training methodologies of varying degrees of biofidelity and evaluate their efficacy on complex image recognition datasets. First, we present biologically plausible Spike Timing Dependent Plasticity (STDP) based deterministic and stochastic algorithms for unsupervised representation learning in SNNs. Our analysis on the CIFAR-10 dataset indicates that STDP-based learning rules enable the convolutional layers to self-learn low-level input features using fewer training examples. However, STDP-based learning is applicable only to shallow SNNs (<= 4 layers) and yields accuracy considerably below the state of the art. To scale SNNs deeper and further improve accuracy, we propose a conversion methodology that maps an off-the-shelf trained ANN to an SNN for energy-efficient inference, demonstrating 69.96% accuracy for VGG16-SNN on ImageNet. However, ANN-to-SNN conversion incurs high inference latency to reach its best accuracy. To minimize this latency, we propose a spike-based error backpropagation algorithm that uses a differentiable approximation of the spiking neuron. Our preliminary experiments on CIFAR-10 show that spike-based error backpropagation effectively captures temporal statistics, reducing inference latency by up to 8x compared to converted SNNs while yielding comparable accuracy.
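To make the surrogate-gradient idea summarized above concrete, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron and replaces the derivative of its non-differentiable spike function with a piecewise-linear surrogate, the kind of differentiable approximation the abstract refers to. This is a minimal illustrative sketch, not the authors' implementation; the leak factor alpha, threshold v_th, and surrogate window width are assumed values chosen for demonstration.

    import numpy as np

    def lif_forward(inputs, v_th=1.0, alpha=0.9):
        """Simulate one LIF neuron over T time steps.
        Returns the binary spike train and the membrane-potential trace."""
        T = inputs.shape[0]
        v = 0.0
        spikes = np.zeros(T)
        v_trace = np.zeros(T)
        for t in range(T):
            v = alpha * v + inputs[t]      # leaky integration of input current
            v_trace[t] = v
            if v >= v_th:                  # non-differentiable firing event
                spikes[t] = 1.0
                v = 0.0                    # hard reset after the spike
        return spikes, v_trace

    def surrogate_grad(v, v_th=1.0, width=0.5):
        """Piecewise-linear stand-in for d(spike)/d(v): constant inside a
        window around the threshold, zero elsewhere; used only in backprop."""
        return np.where(np.abs(v - v_th) < width, 1.0 / (2.0 * width), 0.0)

    # Toy usage: sparse, Poisson-like input spikes over 20 time steps.
    rng = np.random.default_rng(0)
    inp = 0.6 * (rng.random(20) < 0.3)
    spk, v_tr = lif_forward(inp)
    print("spike train:", spk.astype(int))
    print("steps with nonzero surrogate gradient:", np.flatnonzero(surrogate_grad(v_tr)))

In a full network, such per-time-step surrogate gradients would be accumulated via backpropagation through time, which is what allows spike-based training to exploit temporal statistics and shorten inference latency relative to converted SNNs.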
Pages: 8549-8553
Page count: 5
Related Papers
50 records in total
  • [1] Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware
    Tang, Guangzhi
    Kumar, Neelesh
    Michmizos, Konstantinos P.
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 6090 - 6097
  • [2] Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition
    Cao, Yongqiang
    Chen, Yang
    Khosla, Deepak
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 113 (01) : 54 - 66
  • [3] Photonic Spiking Neural Networks with Highly Efficient Training Protocols for Ultrafast Neuromorphic Computing Systems
    Owen-Newns, Dafydd
    Robertson, Joshua
    Hejda, Matěj
    Hurtado, Antonio
INTELLIGENT COMPUTING, 2023, 2
  • [4] Convolutional networks for fast, energy-efficient neuromorphic computing
    Esser, Steven K.
    Merolla, Paul A.
    Arthur, John V.
    Cassidy, Andrew S.
    Appuswamy, Rathinakumar
    Andreopoulos, Alexander
    Berg, David J.
    McKinstry, Jeffrey L.
    Melano, Timothy
    Barch, Davis R.
    di Nolfo, Carmelo
    Datta, Pallab
    Amir, Arnon
    Taba, Brian
    Flickner, Myron D.
    Modha, Dharmendra S.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2016, 113 (41) : 11441 - 11446
  • [5] Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
    Datta, Gourav
    Kundu, Souvik
    Beerel, Peter A.
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Spiking Neural Network on Neuromorphic Hardware for Energy-Efficient Unidimensional SLAM
    Tang, Guangzhi
    Shah, Arpit
    Michmizos, Konstantinos P.
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4176 - 4181
  • [7] Neural Dynamics Pruning for Energy-Efficient Spiking Neural Networks
    Huang, Haoyu
    He, Linxuan
    Liu, Faqiang
    Zhao, Rong
    Shi, Luping
    2024 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME 2024, 2024,
  • [8] Hardware Accelerators for Spiking Neural Networks for Energy-Efficient Edge Computing (Extended Abstract)
    Moitra, Abhishek
    Yin, Ruokai
    Panda, Priyadarshini
    PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2023, GLSVLSI 2023, 2023, : 137 - 138
  • [9] BitSNNs: Revisiting Energy-Efficient Spiking Neural Networks
    Hu, Yangfan
    Zheng, Qian
    Pan, Gang
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (05) : 1736 - 1747