A low-power, high-accuracy with fully on-chip ternary weight hardware architecture for Deep Spiking Neural Networks

Cited by: 5
Authors
Duy-Anh Nguyen [1 ,2 ]
Xuan-Tu Tran [1 ]
Dang, Khanh N. [3 ]
Iacopi, Francesca [4 ]
Affiliations
[1] Vietnam Natl Univ Hanoi VNU, VNU Informat Technol Inst, Hanoi 123106, Vietnam
[2] VNU UET, JTIRC, Hanoi, Vietnam
[3] Vietnam Natl Univ Hanoi VNU, VNU Key Lab Smart Integrated Syst SISLAB, VNU UET, Hanoi 123106, Vietnam
[4] Univ Technol Sydney, 15 Broadway, Ultimo, NSW 2007, Australia
Keywords
Deep Spiking Neural Network; Neuromorphic; Ternary-weight quantization; Hardware implementation; EFFICIENT
DOI
10.1016/j.micpro.2022.104458
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recently, the Deep Spiking Neural Network (DSNN) has emerged as a promising neuromorphic approach for various AI-based applications on edge computing platforms, such as image classification, speech recognition, and robotic control. However, the state-of-the-art offline training algorithms for DSNNs face two major challenges. Firstly, many timesteps are required to reach accuracy comparable to traditional frame-based DNN algorithms. Secondly, extensive memory requirements for weight storage make it impossible to store all the weights on-chip for DSNNs with many layers. The inference process therefore requires continuous access to expensive off-chip memory, ultimately degrading both throughput and power consumption. In this work, we propose a hardware-friendly training approach for DSNNs that constrains the weights to a ternary format, reducing both the memory footprint and the energy consumption. Software simulations on the MNIST and CIFAR10 datasets show that our training approach reaches an accuracy of 97% on MNIST (3-layer fully connected network) and 89.71% on CIFAR10 (VGG16). To demonstrate the energy efficiency of our approach, we propose a neural processing module to implement our trained DSNN. When implemented as a fixed, 3-layer fully-connected system, it achieves an energy efficiency of 74 nJ/image with a classification accuracy of 97% on the MNIST dataset. We also present a scalable design that supports more complex network topologies by integrating the neural processing module with a 3D Network-on-Chip.
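The abstract's central idea is constraining weights to a ternary format {-1, 0, +1} so the whole network fits in on-chip memory. The paper's exact training scheme is not given here; the sketch below illustrates a common threshold-based ternarization heuristic (a magnitude threshold at 0.7 times the mean absolute weight, with a per-tensor scaling factor), which is an assumption and may differ from the authors' method.

```python
import numpy as np

def ternarize(w, delta_factor=0.7):
    """Quantize a float weight tensor to codes in {-1, 0, +1} plus a
    per-tensor scale alpha. The threshold heuristic (delta = 0.7 * mean|w|)
    follows common ternary-weight practice, not necessarily this paper."""
    delta = delta_factor * np.mean(np.abs(w))
    mask = np.abs(w) > delta            # keep only sufficiently large weights
    codes = np.sign(w) * mask           # ternary codes in {-1, 0, +1}
    # Per-tensor scale: mean magnitude of the surviving weights,
    # which minimizes the L2 error ||w - alpha * codes|| for fixed codes.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * codes, codes

# Toy example: small weights collapse to 0, large ones to +/- alpha.
w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
w_q, codes = ternarize(w)
```

With ternary codes, on-chip storage drops to 2 bits per weight, and multiply-accumulate operations reduce to sign-controlled additions, which is what enables the fully on-chip, low-power inference the abstract describes.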
Pages: 15
Related Papers
50 items total
  • [1] High-speed, low-power, and configurable on-chip training acceleration platform for spiking neural networks
    Liu, Yijun
    Xu, Yujie
    Ye, Wujian
    Cui, Youfeng
    Zhang, Boning
    Lin, Wenjie
    APPLIED INTELLIGENCE, 2024, 54 (20) : 9655 - 9670
  • [2] Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware
    Diehl, Peter U.
    Zarrella, Guido
    Cassidy, Andrew
    Pedroni, Bruno U.
    Neftci, Emre
    2016 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC), 2016,
  • [3] Unidirectional and hierarchical on-chip interconnected architecture for large-scale hardware spiking neural networks
    Liu, Junxiu
    Jiang, Dong
    Fu, Qiang
    Luo, Yuling
    Deng, Yaohua
    Qin, Sheng
    Zhang, Shunsheng
    NEUROCOMPUTING, 2024, 609
  • [4] A Low-Power Hardware Architecture for On-Line Supervised Learning in Multi-Layer Spiking Neural Networks
    Zheng, Nan
    Mazumder, Pinaki
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018,
  • [5] Highway Connection for Low-Latency and High-Accuracy Spiking Neural Networks
    Zhang, Anguo
    Wu, Junyi
    Li, Xiumin
    Li, Hung Chun
    Gao, Yueming
    Pun, Sio Hang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (12) : 4579 - 4583
  • [6] A Low-Power SerDes for High-Speed On-Chip Networks
    Park, Dongjun
    Yoon, Junsub
    Kim, Jongsun
    PROCEEDINGS INTERNATIONAL SOC DESIGN CONFERENCE 2017 (ISOCC 2017), 2017, : 252 - 253
  • [7] Hardware-aware Model Architecture for Ternary Spiking Neural Networks
    Wu, Nai-Chun
    Chen, Tsu-Hsiang
    Huang, Chih-Tsun
    2023 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI-TSA/VLSI-DAT, 2023,
  • [8] High-Accuracy Low-Power Energy Metering Chip without External Crystal
    Wu, Boqiang
    Tan, Nianxiong
    Zhong, Shupeng
    Men, Changyou
    Huang, Sufang
    2018 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2018, : 1424 - 1429
  • [9] An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification
    Ju, Xiping
    Fang, Biao
    Yan, Rui
    Xu, Xiaoliang
    Tang, Huajin
    NEURAL COMPUTATION, 2020, 32 (01) : 182 - 204
  • [10] An On-Chip Learning, Low-Power Probabilistic Spiking Neural Network with Long-Term Memory
    Hsieh, Hung-Yi
    Tang, Kea-Tiong
    2013 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (BIOCAS), 2013, : 5 - 8