Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function

Cited by: 1
Authors
Shymyrbay, Ayan [1 ]
Fouda, Mohammed E. [2 ]
Eltawil, Ahmed [1 ]
Affiliations
[1] King Abdullah Univ Sci & Technol, CEMSE Div, Dept ECE, Thuwal, Saudi Arabia
[2] Univ Calif Irvine, Ctr Embedded & Cyber Phys Syst, Irvine, CA 92697 USA
Keywords
spiking neural networks; memory compression; quantization; binarization; edge computing;
DOI
10.1109/IJCNN54540.2023.10191387
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have proven to be highly effective tools across a variety of domains, yet their computational and memory costs prevent wide deployment on portable devices. The recent rapid growth of edge computing has led to an active search for techniques that address these limitations of machine learning frameworks. The quantization of artificial neural networks (ANNs), which converts full-precision synaptic weights into low-bit versions, has emerged as one such solution. At the same time, spiking neural networks (SNNs) have become an attractive alternative to conventional ANNs due to their temporal information processing capability, energy efficiency, and high biological plausibility. Despite being driven by the same motivation, the combination of the two concepts has yet to be thoroughly studied. This work therefore aims to bridge the gap between recent progress in quantized neural networks and SNNs. It presents an extensive study of a quantization function, represented as a linear combination of sigmoid functions, applied to low-bit weight quantization in SNNs. The presented quantization function achieves state-of-the-art performance for binary networks on four popular benchmarks: CIFAR10-DVS (64.05%), DVS128 Gesture (95.45%), N-Caltech101 (68.71%), and N-MNIST (99.43%), with small accuracy drops and up to 31x memory savings, outperforming existing methods.
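The abstract does not give the paper's exact formula, but the general idea of a differentiable staircase quantizer built from a linear combination of sigmoid functions can be sketched as follows. This is an illustrative assumption, not the authors' implementation; the function names and the steepness parameter `T` are invented for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_quantize(w, n_bits=1, T=10.0):
    """Differentiable staircase quantizer (illustrative sketch).

    Maps weights in [-1, 1] onto 2**n_bits uniformly spaced levels using
    a sum of shifted sigmoids. As the steepness T grows, the function
    approaches a hard staircase quantizer while remaining differentiable
    everywhere, so gradients flow through it during quantization-aware
    training without a straight-through estimator.
    """
    n_levels = 2 ** n_bits
    step = 2.0 / (n_levels - 1)          # spacing between adjacent levels
    # One sigmoid transition midway between each pair of adjacent levels.
    thresholds = -1.0 + step * (np.arange(n_levels - 1) + 0.5)
    return -1.0 + step * sum(sigmoid(T * (w - t)) for t in thresholds)
```

With `n_bits=1` this reduces to a smooth approximation of the sign function, `-1 + 2 * sigmoid(T * w)`, which is the binary-network case reported in the benchmarks above; annealing `T` upward over training tightens the approximation toward the hard quantizer.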
Pages: 8