Double MAC on a Cell: A 22-nm 8T-SRAM-Based Analog In-Memory Accelerator for Binary/Ternary Neural Networks Featuring Split Wordline

Cited by: 0
Authors
Tagata, Hiroto [1 ]
Sato, Takashi [1 ]
Awano, Hiromitsu [1 ]
Affiliations
[1] Kyoto Univ, Grad Sch Informat, Dept Commun & Comp Engn, Kyoto 6068501, Japan
Funding
Japan Science and Technology Agency (JST)
Keywords
Random access memory; Circuits; Capacitors; Throughput; In-memory computing; Common Information Model (computing); Time-domain analysis; Convolutional neural networks; Microprocessors; Delays; Quantized neural network (QNN); analog computing-in-memory (CIM); static random access memory (SRAM); voltage-mode accumulation; multiply-and-accumulate (MAC); COMPUTING SRAM MACRO; BINARY
DOI
10.1109/OJCAS.2024.3482469
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
This paper proposes a novel 8T-SRAM-based computing-in-memory (CIM) accelerator for binary/ternary neural networks. The proposed split dual-port 8T-SRAM cell has two input ports and simultaneously performs two binary multiply-and-accumulate (MAC) operations on its left and right bitlines. This approach doubles throughput without significantly increasing area or power consumption, since the only area overhead is two additional wordline (WL) wires compared with the conventional 8T-SRAM cell. In addition, the proposed circuit supports both binary and ternary activation inputs, allowing a flexible trade-off between energy efficiency and inference accuracy depending on the application. The proposed SRAM macro consists of a 128×128 SRAM array that outputs the MAC results of 96 binary/ternary inputs and 96×128 binary weights as 1-5-bit digital values. The circuit was evaluated by post-layout simulation of the complete CIM macro laid out in a 22-nm process. It operates at 1 GHz and achieves a maximum area efficiency of 3320 TOPS/mm², 3.4× higher than prior work, with a reasonable energy efficiency of 1471 TOPS/W. Simulated inference accuracies are 96.45%/97.67% on the MNIST dataset with binary/ternary MLP models and 86.32%/88.56% on the CIFAR-10 dataset with binary/ternary VGG-like CNN models.
Pages: 328-340
Page count: 13
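To make the dataflow described in the abstract concrete, below is a minimal NumPy behavioral sketch, not the authors' implementation: it models one cycle in which the split wordline lets two activation vectors (one per bitline side) be multiplied against the same 96×128 binary weight array, with each column sum quantized to a 1-5-bit digital code. All names (bt_mac, adc_quantize, ROWS, COLS) are illustrative assumptions, and the circuit-level voltage-mode accumulation and ADC behavior are abstracted to plain arithmetic.

```python
# Behavioral sketch only: models the binary/ternary MAC dataflow, not the analog circuit.
import numpy as np

ROWS, COLS = 96, 128          # 96 activated rows of the 128x128 array, 128 output columns
ADC_BITS = 5                  # the macro reports MAC results as 1-5 bit digital values

def adc_quantize(mac, bits=ADC_BITS, full_scale=ROWS):
    """Map an ideal MAC value in [-full_scale, +full_scale] to an unsigned digital code."""
    levels = 2 ** bits - 1
    code = np.round((mac + full_scale) / (2 * full_scale) * levels)
    return np.clip(code, 0, levels).astype(int)

def bt_mac(x_left, x_right, w):
    """Two MACs per cycle: the split wordline applies one input vector to the left
    bitlines and another to the right bitlines of the same weight array.
    x_*: length-96 vectors in {-1, +1} (binary) or {-1, 0, +1} (ternary);
    w:   96x128 binary weight matrix in {-1, +1}."""
    return adc_quantize(x_left @ w), adc_quantize(x_right @ w)

# Example: one ternary and one binary input vector evaluated in the same cycle.
rng = np.random.default_rng(0)
w = rng.choice([-1, 1], size=(ROWS, COLS))
x_t = rng.choice([-1, 0, 1], size=ROWS)   # ternary activations
x_b = rng.choice([-1, 1], size=ROWS)      # binary activations
out_ternary, out_binary = bt_mac(x_t, x_b, w)
```

In the sketch the throughput doubling appears simply as two matrix-vector products per call; in the actual macro this corresponds to the two wordline halves driving the left and right bitlines of each cell in the same clock cycle.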