NUTS-BSNN: A non-uniform time-step binarized spiking neural network with energy-efficient in-memory computing macro

Cited by: 2
Authors
Dinh, Van-Ngoc [1 ]
Bui, Ngoc-My [1 ]
Nguyen, Van-Tinh [2 ]
John, Deepu [3 ]
Lin, Long-Yang [4 ]
Trinh, Quang-Kien [5 ]
Affiliations
[1] Acad Mil Sci & Technol, Hanoi, Vietnam
[2] Nara Inst Sci & Technol, Nara, Japan
[3] Univ Coll Dublin, Dublin, Ireland
[4] Southern Univ Sci & Technol, Sch Microelect, Shenzhen, Peoples R China
[5] Le Quy Don Tech Univ, Fac Radioelect, Hanoi, Vietnam
Funding
National Natural Science Foundation of China
Keywords
Neuromorphic Computing; Binary Spiking Neural Networks; In-memory Computing; Edge-AI Applications; Third-generation
DOI
10.1016/j.neucom.2023.126838
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
This work introduces NUTS-BSNN, a Non-Uniform Time-Step Binarized Spiking Neural Network. NUTS-BSNN is a fully binarized spiking neural network in which all weights, including those of the input and output layers, are binary. In the input and output layers, the weights are represented as stochastic number streams, while in the hidden layers they are approximated by binary values so that computations reduce to simple XNOR operations. To compensate for the information loss caused by binarization, the convolutions at the input layer are computed sequentially over multiple time-steps, and their results are accumulated before spikes are generated for the subsequent layer, which improves overall performance. We chose 14 time-steps for accumulation as a good trade-off between accuracy and inference latency. The proposed network was trained directly with a surrogate gradient algorithm and evaluated on three datasets, achieving classification accuracies of 93.25%, 88.71%, and 70.31% on Fashion-MNIST, CIFAR-10, and CIFAR-100, respectively. Furthermore, we present an in-memory computing architecture for NUTS-BSNN that limits resource and power consumption in hardware implementation.
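To make the mechanism in the abstract concrete, below is a minimal illustrative sketch (not the authors' code) of the two ideas it describes: an XNOR-based dot product over {-1, +1} binary weights, and accumulation of input-layer results over 14 time-steps with stochastic binary encoding of the input before spikes are emitted. The layer sizes, firing threshold, and soft-reset rule are assumptions chosen only for illustration.

# Illustrative sketch of XNOR-based binary computation with multi-time-step
# accumulation, as outlined in the NUTS-BSNN abstract. Layer sizes, threshold,
# and the reset rule are hypothetical choices, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
T = 14               # accumulation time-steps (value stated in the abstract)
n_in, n_out = 64, 8  # hypothetical layer sizes
threshold = 4.0      # hypothetical firing threshold

# Binary weights in {-1, +1}; with this encoding, multiplication reduces to
# an XNOR on the {0, 1} bit representation followed by a popcount.
W = rng.choice([-1.0, 1.0], size=(n_out, n_in))

def xnor_dot(w_row, x):
    """Dot product of two {-1, +1} vectors via XNOR and popcount arithmetic."""
    w_bits = w_row > 0
    x_bits = x > 0
    matches = np.sum(~(w_bits ^ x_bits))   # XNOR, then count matching bits
    return 2 * matches - len(w_row)        # map match count back to a signed sum

membrane = np.zeros(n_out)
spikes_out = []
for t in range(T):
    # Stochastic binary encoding of a (hypothetical) real-valued input:
    # each time-step draws +1 with probability proportional to the input value.
    x_real = rng.uniform(-1.0, 1.0, size=n_in)
    x_bin = np.where(rng.uniform(size=n_in) < (x_real + 1) / 2, 1.0, -1.0)

    # Accumulate XNOR-based results over time-steps before generating spikes.
    membrane += np.array([xnor_dot(W[i], x_bin) for i in range(n_out)])
    fired = membrane >= threshold
    spikes_out.append(fired.astype(int))
    membrane[fired] -= threshold            # soft reset (one common choice)

print(np.array(spikes_out))

In this sketch, accumulating the per-time-step XNOR results in the membrane potential before thresholding plays the role of the multi-time-step accumulation that the abstract credits with recovering accuracy lost to binarization.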
Pages: 12