An Area- and Energy-Efficient Spiking Neural Network With Spike-Time-Dependent Plasticity Realized With SRAM Processing-in-Memory Macro and On-Chip Unsupervised Learning

Cited by: 7
Authors
Liu, Shuang [1 ]
Wang, J. J. [1 ]
Zhou, J. T. [1 ]
Hu, S. G. [1 ]
Yu, Q. [1 ]
Chen, T. P. [2 ]
Liu, Y. [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, State Key Lab Thin Solid Films & Integrated Device, Chengdu 610054, Peoples R China
[2] Nanyang Technol Univ, Singapore 639798, Singapore
Keywords
Random access memory; Unsupervised learning; System-on-chip; Hardware; Capacitors; Biological neural networks; Supervised learning; MNIST; on-chip unsupervised learning; processing-in-memory (PIM); spiking neural network (SNN); SRAM; spike-time-dependent plasticity (STDP)
DOI
10.1109/TBCAS.2023.3242413
CLC Classification Number
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
In this article, we present a spiking neural network (SNN) built on an SRAM processing-in-memory (PIM) macro with on-chip unsupervised learning based on spike-time-dependent plasticity (STDP). Algorithm-hardware co-design of a hardware-friendly SNN and an efficient STDP-based learning methodology improves area and energy efficiency. The proposed macro exploits charge sharing among capacitors to perform fully parallel Reconfigurable Multi-bit PIM Multiply-Accumulate (RMPMA) operations. A thermometer-coded Programmable High-precision PIM Threshold Generator (PHPTG) is designed to achieve low differential non-linearity (DNL) and high linearity. In the macro, each column of PIM cells together with a comparator acts as a neuron that accumulates membrane potential and fires spikes. A simplified Winner-Takes-All (WTA) mechanism is used in the proposed hardware-friendly architecture. By combining the hardware-friendly STDP algorithm with the parallel Word Lines (WLs) and Processing Bit Lines (PBLs), we realize unsupervised learning and recognition of the Modified National Institute of Standards and Technology (MNIST) dataset. The chip was fabricated in a 55 nm CMOS process. Measurements show that the chip achieves a learning efficiency of 0.47 nJ/pixel and a learning energy efficiency of 70.38 TOPS/W. This work paves the way for on-chip learning in PIM with lower power consumption and fewer hardware resources.
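The column-as-neuron operation and the simplified WTA/STDP learning loop described in the abstract can be approximated in software. The following minimal Python sketch is a behavioral illustration only, not the authors' circuit or exact algorithm: the network size, threshold, weight range, learning-rate constants, and the function names (forward, stdp_step) are assumptions introduced for illustration, and the STDP timing is reduced to a binary "input recently active" flag.

import numpy as np

# Behavioral sketch (assumed, not the authors' design): each PIM column acts
# as a neuron that accumulates a weighted sum of input spikes (the MAC step),
# compares it against a programmable threshold (the threshold-generator role),
# and may fire; a simplified winner-takes-all keeps a single winning neuron.

N_IN, N_OUT = 784, 16           # assumed sizes: MNIST pixels -> 16 output neurons
rng = np.random.default_rng(0)
weights = rng.integers(0, 16, size=(N_IN, N_OUT)).astype(float)  # multi-bit synapses

THRESHOLD = 800.0               # illustrative firing threshold
A_PLUS, A_MINUS = 1.0, 0.25     # illustrative potentiation / depression steps
W_MAX = 15.0                    # multi-bit weight ceiling

def forward(spikes_in):
    """One time step: MAC per column, threshold compare, simplified WTA."""
    membrane = spikes_in @ weights                 # parallel MAC across all columns
    fired = np.flatnonzero(membrane >= THRESHOLD)  # columns whose comparator trips
    if fired.size == 0:
        return None
    # Simplified WTA: only the fired column with the largest potential wins.
    return int(fired[np.argmax(membrane[fired])])

def stdp_step(spikes_in, winner):
    """Hardware-friendly STDP sketch: potentiate synapses whose input spiked
    before the output spike, depress the others (timing collapsed to a flag)."""
    pre_active = spikes_in > 0
    weights[pre_active, winner] = np.minimum(weights[pre_active, winner] + A_PLUS, W_MAX)
    weights[~pre_active, winner] = np.maximum(weights[~pre_active, winner] - A_MINUS, 0.0)

# Usage: present one binarized MNIST-like frame; update weights if a neuron fires.
frame = (rng.random(N_IN) > 0.8).astype(float)
winner = forward(frame)
if winner is not None:
    stdp_step(frame, winner)

In the reported hardware, the weighted sum is realized by capacitor charge sharing on the processing bit lines rather than by digital arithmetic, and the threshold comparison is done per column by a comparator against the PHPTG output; the sketch above only mirrors that behavior at the algorithmic level.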
Pages: 92 - 104
Number of pages: 13