Memory-Efficient Reversible Spiking Neural Networks

Cited by: 0
Authors
Zhang, Hong [1]
Zhang, Yu [1,2]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou, Peoples R China
[2] Key Lab Collaborat Sensing & Autonomous Unmanned, Hangzhou, Peoples R China
Keywords
DOI
N/A
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs) due to their high energy efficiency on neuromorphic hardware. However, SNNs are unfolded over simulation time steps during training, so they require much more memory than ANNs, which impedes the training of deeper SNN models. In this paper, we propose reversible spiking neural networks to reduce the memory cost of intermediate activations and membrane potentials during training. First, we extend the reversible architecture along the temporal dimension and propose the reversible spiking block, which can reconstruct the computational graph and recompute all intermediate variables of the forward pass with a reverse process. On this basis, we adapt state-of-the-art SNN models into reversible variants, namely the reversible spiking ResNet (RevSResNet) and the reversible spiking transformer (RevSFormer). Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies while consuming 3.79x and 3.00x less GPU memory per image than their counterparts of roughly identical model complexity and parameter count. We believe this work can relax the memory constraints in SNN training and pave the way for training extremely large and deep SNNs.
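The abstract's key mechanism, recomputing intermediate variables with a reverse process instead of storing them, builds on the standard reversible (RevNet-style) coupling. A minimal Python sketch of that coupling follows; `F` and `G` are placeholder sub-functions standing in for the paper's actual spiking sub-modules, so this is an illustration of the general idea, not the paper's implementation:

```python
def rev_forward(x1, x2, F, G):
    """Forward pass of a reversible block: map inputs (x1, x2)
    to outputs (y1, y2) via two additive coupling steps."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Reverse pass: reconstruct (x1, x2) exactly from (y1, y2).
    Because inputs are recoverable from outputs, intermediate
    activations need not be kept in memory during training."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2
```

Since reconstruction is exact (not approximate), activation memory stays constant in depth: each block's inputs are recomputed on the fly during the backward pass at the cost of one extra forward evaluation per block.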
Pages: 16759-16767
Page count: 9