SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron

Cited by: 65
Authors
Mozafari, Milad [1 ,2 ]
Ganjtabesh, Mohammad [1 ]
Nowzari-Dalini, Abbas [1 ]
Masquelier, Timothee [2 ]
Affiliations
[1] Univ Tehran, Sch Math Stat & Comp Sci, Dept Comp Sci, Tehran, Iran
[2] Univ Toulouse 3, CNRS, CERCO UMR 5549, Toulouse, France
Keywords
convolutional spiking neural networks; time-to-first-spike coding; one spike per neuron; STDP; reward-modulated STDP; tensor-based computing; GPU acceleration; visual features
DOI
10.3389/fnins.2019.00625
Chinese Library Classification
Q189 [Neuroscience]
Discipline Code
071006
Abstract
Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained considerable interest, since SNNs are hardware-friendly and energy-efficient. However, unlike their non-spiking counterparts, most existing SNN simulation frameworks are not efficient enough for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source, high-speed simulation framework based on PyTorch. This framework simulates convolutional SNNs with at most one spike per neuron and a rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, and other rules can be added easily. Beyond these properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and performed entirely by PyTorch functions, which in turn provides just-in-time optimization for running on CPUs, GPUs, or multi-GPU platforms.
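The two core ideas the abstract describes — rank-order (intensity-to-latency) encoding and neurons that fire at most once — can both be expressed as plain tensor operations. The sketch below is an illustrative plain-PyTorch rendering of those ideas under stated assumptions, not SpykeTorch's actual API; the function names `intensity_to_latency` and `fire_once` are hypothetical.

```python
# Hedged sketch (hypothetical helpers, not the SpykeTorch API): rank-order
# encoding and at-most-one-spike integrate-and-fire, as pure tensor ops.
import torch


def intensity_to_latency(intensities, time_steps):
    """Rank-order encode: stronger inputs spike earlier.

    Returns a cumulative binary spike-wave tensor of shape
    (time_steps, *intensities.shape); once a unit has spiked, it stays on.
    Ties in intensity are broken arbitrarily by the sort.
    """
    flat = intensities.flatten()
    # Sort units by decreasing intensity; spread them evenly over time steps.
    order = torch.argsort(flat, descending=True)
    spike_time = torch.empty_like(order)
    spike_time[order] = torch.arange(flat.numel()) * time_steps // flat.numel()
    spike_time = spike_time.reshape(intensities.shape)
    # Time axis broadcast against the input shape.
    t = torch.arange(time_steps).reshape(-1, *([1] * intensities.dim()))
    return (t >= spike_time).float()


def fire_once(potentials, threshold):
    """Emit at most one spike per neuron.

    `potentials` has shape (time_steps, n_neurons); a neuron spikes only at
    the first time step where its potential reaches the threshold.
    """
    above = potentials >= threshold
    # Number of threshold crossings strictly before each time step.
    earlier = torch.cumsum(above.float(), dim=0) - above.float()
    return above & (earlier == 0)
```

Because both helpers are built from vectorized PyTorch ops (`argsort`, `cumsum`, broadcast comparisons), the same code runs unmodified on CPU or GPU tensors, which mirrors the tensor-based design choice the abstract attributes to SpykeTorch.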
Pages: 12