TFix: Exploiting the Natural Redundancy of Ternary Neural Networks for Fault Tolerant In-Memory Vector Matrix Multiplication

Cited by: 1
Authors
Malhotra, Akul [1 ]
Wang, Chunguang [1 ]
Gupta, Sumeet Kumar [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
Keywords
In-Memory Computing; Vector Matrix Multiplication; Ternary Deep Neural Networks; Fault Tolerance;
DOI
10.1109/DAC56929.2023.10247835
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In-memory computing (IMC) and quantization have emerged as promising techniques for edge-based deep neural network (DNN) accelerators, reducing their energy, latency, and storage requirements. In the pursuit of ultra-low precision, ternary-precision DNNs (TDNNs) offer high efficiency without sacrificing much inference accuracy. In this work, we explore the impact of hard faults on IMC-based TDNNs and propose TFix to enhance their fault tolerance. TFix exploits the natural redundancy present in most ternary IMC bitcells, as well as the high weight sparsity in TDNNs, to provide up to a 40.68% accuracy increase over the baseline with < 6% energy overhead.
Pages: 6
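The redundancy the abstract refers to can be illustrated with a common ternary bitcell convention (an assumption for illustration, not necessarily the paper's exact scheme): a weight w in {-1, 0, +1} is stored across two devices as (pos, neg), with +1 as (1, 0), -1 as (0, 1), and 0 as (0, 0). Because a zero weight already holds (0, 0), a stuck-at-0 hard fault in either of its devices leaves the vector-matrix product unchanged, which is why high weight sparsity in TDNNs helps fault tolerance. A minimal sketch:

```python
import numpy as np

def encode_ternary(W):
    """Split a ternary weight matrix into its two device arrays (pos, neg)."""
    pos = (W > 0).astype(int)
    neg = (W < 0).astype(int)
    return pos, neg

def vmm(x, pos, neg):
    """Vector-matrix multiply on the encoded arrays: x @ (pos - neg) == x @ W."""
    return x @ (pos - neg)

# A small, deliberately sparse ternary weight matrix, as is typical for TDNNs.
W = np.array([[ 1, 0, -1],
              [ 0, 0,  1],
              [-1, 0,  0],
              [ 0, 1,  0]])
x = np.array([1, 0, 1, 1])

pos, neg = encode_ternary(W)
clean = vmm(x, pos, neg)

# Stuck-at-0 fault on a device of a ZERO weight: the cell already stores
# (0, 0), so the output is unaffected -- sparsity masks the fault.
zr, zc = np.argwhere(W == 0)[0]
pos_faulty = pos.copy()
pos_faulty[zr, zc] = 0
assert np.array_equal(vmm(x, pos_faulty, neg), clean)

# By contrast, the same fault on a +1 weight's device silently flips it to 0
# and corrupts the result.
pr, pc = np.argwhere(W == 1)[0]
pos_bad = pos.copy()
pos_bad[pr, pc] = 0
assert not np.array_equal(vmm(x, pos_bad, neg), clean)
```

This only demonstrates why zero weights are naturally fault-immune under a two-device encoding; the paper's TFix mechanism for protecting nonzero weights is not reproduced here.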