TiM-DNN: Ternary In-Memory Accelerator for Deep Neural Networks

Cited by: 22
Authors
Jain, Shubham [1,2]
Gupta, Sumeet Kumar [1]
Raghunathan, Anand [1]
Affiliations
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47906 USA
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
Keywords
AI hardware; in-memory computing; low-precision deep neural networks (DNNs); ternary dot-products; ternary neural networks;
DOI
10.1109/TVLSI.2020.2993045
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex deep neural networks (DNNs). In the quest for lower precision, recent studies have shown that ternary DNNs, which represent weights and activations by signed ternary values, are a promising sweet spot, achieving accuracy close to full-precision networks on complex tasks. We propose TiM-DNN, a programmable in-memory accelerator that is specifically designed to execute ternary DNNs. TiM-DNN supports various ternary representations, including the unweighted {-1, 0, 1}, symmetric weighted {-a, 0, a}, and asymmetric weighted {-a, 0, b} ternary systems. The building blocks of TiM-DNN are TiM tiles: specialized memory arrays that perform massively parallel signed ternary vector-matrix multiplications with a single access. TiM tiles are in turn composed of ternary processing cells (TPCs), bit-cells that function as both ternary storage units and signed ternary multiplication units. We evaluate an implementation of TiM-DNN in 32-nm technology using an architectural simulator calibrated with SPICE simulations and RTL synthesis. The evaluation spans a suite of state-of-the-art DNN benchmarks, including both deep convolutional and recurrent neural networks. A 32-tile instance of TiM-DNN achieves a peak performance of 114 TOPS/s, consumes 0.9 W of power, and occupies 1.96 mm² of chip area, representing 300x and 388x improvements in TOPS/W and TOPS/mm², respectively, over an NVIDIA Tesla V100 GPU. In comparison to specialized DNN accelerators, TiM-DNN achieves 55x-240x and 160x-291x improvements in TOPS/W and TOPS/mm², respectively. Finally, compared to a well-optimized near-memory accelerator for ternary DNNs, TiM-DNN demonstrates a 3.9x-4.7x improvement in system-level energy and a 3.2x-4.2x speedup, underscoring the potential of in-memory computing for ternary DNNs.
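To make the core operation concrete, the following minimal Python sketch (an illustration added to this summary, not code from the paper) models the signed ternary dot product that a TiM tile evaluates in a single array access. The function name ternary_dot and the per-level magnitudes wp, wn, xp, xn are hypothetical, chosen only to cover the unweighted {-1, 0, 1}, symmetric weighted {-a, 0, a}, and asymmetric weighted {-a, 0, b} representations mentioned in the abstract.

# Minimal sketch (hypothetical helper, not from the paper): a software model of
# the signed ternary dot product computed by a TiM tile in one access.
def ternary_dot(w_digits, x_digits, wp=1.0, wn=1.0, xp=1.0, xn=1.0):
    """Dot product of two signed ternary vectors.

    w_digits, x_digits: sequences of ternary digits in {-1, 0, +1}.
    wp/wn (and xp/xn): magnitudes of the positive/negative levels, so a
    digit d encodes +wp, 0, or -wn (asymmetric weighted when wp != wn).
    """
    acc = 0.0
    for wd, xd in zip(w_digits, x_digits):
        w_val = wp if wd > 0 else (-wn if wd < 0 else 0.0)
        x_val = xp if xd > 0 else (-xn if xd < 0 else 0.0)
        acc += w_val * x_val  # each TPC contributes one signed ternary product
    return acc

# Unweighted {-1, 0, 1} weights and activations:
print(ternary_dot([1, -1, 0, 1], [1, 1, -1, 0]))                   # 0.0
# Asymmetric weighted {-a, 0, b} weights with a = 0.5, b = 1.5:
print(ternary_dot([1, -1, 0, 1], [1, 1, -1, 0], wp=1.5, wn=0.5))   # 1.0

In the hardware described by the abstract, this accumulation is performed in parallel across an entire weight matrix stored in the TiM tile, rather than element by element as in this software model.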
Pages: 1567-1577 (11 pages)
Related Papers (50 records in total; 10 listed below)
  • [1] FAT: An In-Memory Accelerator With Fast Addition for Ternary Weight Neural Networks. Zhu, Shien; Duong, Luan H. K.; Chen, Hui; Liu, Di; Liu, Weichen. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023, 42(3): 781-794.
  • [2] Ternary In-Memory MAC Accelerator With Dual-6T SRAM Cell for Deep Neural Networks. Wang, Xudong; Li, Geng; Sun, Jiacong; Fan, Huanjie; Chen, Yong; Jiao, Hailong. 2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), 2022: 246-250.
  • [3] In-Memory Computing Based Hardware Accelerator Module for Deep Neural Networks. Appukuttan, Allen; Thomas, Emmanuel; Nair, Harinandan R.; Hemanth, S.; Dhanaraj, K. J.; Azeez, Maleeha Abdul. 2022 IEEE 19th India Council International Conference (INDICON), 2022.
  • [4] Vesti: Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks. Yin, Shihui; Jiang, Zhewei; Kim, Minkyu; Gupta, Tushar; Seok, Mingoo; Seo, Jae-Sun. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2020, 28(1): 48-61.
  • [5] Analog In-Memory Subthreshold Deep Neural Network Accelerator. Fick, L.; Blaauw, D.; Sylvester, D.; Skrzyniarz, S.; Parikh, M.; Fick, D. 2017 IEEE Custom Integrated Circuits Conference (CICC), 2017.
  • [6] Noise tolerant ternary weight deep neural networks for analog in-memory inference. Doevenspeck, Jonas; Vrancx, Peter; Laubeuf, Nathan; Mallik, Arindam; Debacker, Peter; Verkest, Diederik; Lauwereins, Rudy; Dehaene, Wim. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
  • [7] Eidetic: An In-Memory Matrix Multiplication Accelerator for Neural Networks. Eckert, Charles; Subramaniyan, Arun; Wang, Xiaowei; Augustine, Charles; Iyer, Ravishankar; Das, Reetuparna. IEEE Transactions on Computers, 2023, 72(6): 1539-1553.
  • [8] XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks. Jiang, Zhewei; Yin, Shihui; Seok, Mingoo; Seo, Jae-sun. 2018 IEEE Symposium on VLSI Technology, 2018: 173-174.
  • [9] XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks. Yin, Shihui; Jiang, Zhewei; Seo, Jae-Sun; Seok, Mingoo. IEEE Journal of Solid-State Circuits, 2020, 55(6): 1733-1743.
  • [10] An MRAM-based Deep In-Memory Architecture for Deep Neural Networks. Patil, Ameya D.; Hua, Haocheng; Gonugondla, Sujan; Kang, Mingu; Shanbhag, Naresh R. 2019 IEEE International Symposium on Circuits and Systems (ISCAS), 2019.