A Bit-Precision Reconfigurable Digital In-Memory Computing Macro for Energy-Efficient Processing of Artificial Neural Networks

Cited by: 0
Authors
Kim, Hyunjoon [1 ]
Chen, Qian [1 ]
Yoo, Taegeun [1 ]
Kim, Tony Tae-Hyoung [1 ]
Kim, Bongjin [1 ]
Affiliations
[1] Nanyang Technol Univ Singapore, Sch Elect & Elect Engn, 50 Nanyang Ave, Singapore 639798, Singapore
Keywords
multiply and accumulate; artificial neural network; in-memory computing; reconfigurable accelerator;
DOI
Not available
CLC Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
In this work, we propose an in-memory computing macro with digitally reconfigurable 1-16b bit precision for energy-efficient processing of artificial neural networks. The proposed macro consists of 16K (128x128) bitcells. Each bitcell comprises three functional blocks: a standard 6T SRAM cell storing a binary weight, an XNOR gate serving as a bitwise multiplier, and a full adder for bitwise addition. The digital bitcell array can be reconfigured into parallel row neurons, each built from 128 column-shaped multiply-and-accumulate (column-MAC) units placed in a row. A 65nm test chip was fabricated, and the measured energy efficiency for 1-to-16b precision is 117.3-to-2.06 TOPS/W.
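The sketch below is a minimal Python behavioral model of the two operations the abstract describes: the XNOR-based binary multiply-and-accumulate of a single column, and the bit-serial shift-and-add that allows the same column hardware to support 1-16b input precision. It models functionality only, not the 6T SRAM array, adder chain, or cycle timing, and the function names (`column_mac_1b`, `bit_serial_mac`) are illustrative assumptions rather than interfaces from the paper.

```python
# Behavioral sketch (not the silicon implementation) of an XNOR-based
# column-MAC with bit-serial multi-bit accumulation. Assumptions:
#   - 1b mode: weights/activations are +1/-1, encoded as bits 1/0, so the
#     XNOR of the two bit encodings equals the sign of the product.
#   - Multi-bit mode: unsigned inputs are fed one bit-plane per cycle
#     (LSB first) and the per-cycle column sums are shift-and-added.
from typing import List


def column_mac_1b(weight_bits: List[int], input_bits: List[int]) -> int:
    """Binary dot product of one column: XNOR outputs mapped to +/-1 and summed."""
    assert len(weight_bits) == len(input_bits)
    acc = 0
    for w, x in zip(weight_bits, input_bits):
        xnor = 1 - (w ^ x)           # 1 if the bits match, else 0
        acc += 1 if xnor else -1     # map {1, 0} -> {+1, -1} and accumulate
    return acc


def bit_serial_mac(weights: List[int], inputs: List[int], in_bits: int) -> int:
    """Multi-bit (unsigned) MAC computed bit-serially over the input bits.

    Each cycle multiplies the stored weights by one input bit-plane and
    shift-adds the partial sum, mimicking a reconfigurable 1-16b datapath.
    """
    assert 1 <= in_bits <= 16
    acc = 0
    for b in range(in_bits):                      # one bit-plane per cycle
        plane = [(x >> b) & 1 for x in inputs]    # LSB-first bit-serial input
        partial = sum(w * p for w, p in zip(weights, plane))
        acc += partial << b                       # shift-and-add accumulation
    return acc


if __name__ == "__main__":
    # 1b (binarized) mode: 8 of the 128 rows shown for brevity.
    w1 = [1, 0, 1, 1, 0, 0, 1, 0]
    x1 = [1, 1, 0, 1, 0, 1, 1, 0]
    print(column_mac_1b(w1, x1))                  # 2 (matches minus mismatches)

    # 4b mode: the bit-serial result equals the ordinary integer dot product.
    w4 = [3, 5, 2, 7]
    x4 = [1, 4, 6, 2]
    print(bit_serial_mac(w4, x4, in_bits=4))      # 3 + 20 + 12 + 14 = 49
```

In the 1b case the XNOR of the bit encodings gives the sign of the +/-1 product, so the column sum reduces to matches minus mismatches; for wider inputs the same column is reused across cycles, which is what makes the precision reconfigurable without changing the bitcell array.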
Pages: 166-167
Number of pages: 2