RQ-DNN: Reliable Quantization for Fault-tolerant Deep Neural Networks

Cited by: 1
Authors
Choi, Insu [1]
Hong, Jae-Youn [1]
Jeon, JaeHwa [1]
Yang, Joon-Sung [1,2]
Affiliations
[1] Yonsei Univ, Dept Elect & Elect Engn, Seoul, South Korea
[2] Yonsei Univ, Dept Syst Semicond Engn, Seoul, South Korea
Keywords
DNN; Fault-Tolerance; Reliability; Quantization; Machine Learning
DOI
10.1109/DAC56929.2023.10247670
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) are deployed in many real-time and safety-critical applications such as autonomous vehicles and medical diagnosis. In such applications, quantization is used to compress the model and reduce storage and computation. However, recent research has shown that memory faults can cause a significant drop in DNN accuracy, while conventional quantization methods focus only on model compression. This paper proposes a novel method that performs model quantization while remarkably improving the fault tolerance of the model. It can be combined with hardware approaches such as Error Correcting Codes (ECC) to further improve fault tolerance. The proposed method reduces the error patterns that negatively impact classification accuracy by modifying weight distributions and applying a novel masking-based clipping function. Experimental results show that the proposed method enhances the fault tolerance of the quantized DNN, which can tolerate bit error rates up to 1803x higher than the conventional method.
Pages: 2
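The abstract only summarizes the approach, so the following Python sketch is purely illustrative: the restricted quantization range, the uniform random bit-flip fault model, and the zero-masking rule are assumptions standing in for the paper's actual weight-distribution modification and masking-based clipping function, which are not specified in this record.

# Illustrative sketch only -- NOT the RQ-DNN algorithm from the paper. The narrow
# quantization range, the bit-flip fault model, and the zero-masking rule below are
# assumptions used to show how a masking-based clipping step can catch corrupted
# quantized weights.
import numpy as np


def quantize_restricted(w, scale, qmax=63):
    # Uniform symmetric quantization into a deliberately narrow range [-qmax, qmax],
    # so that any stored value outside this band can only come from a fault.
    return np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)


def inject_bit_flips(q, ber, rng):
    # Simple fault model: flip each stored bit independently with probability `ber`.
    bits = np.unpackbits(q.view(np.uint8))
    flips = (rng.random(bits.shape) < ber).astype(np.uint8)
    return np.packbits(bits ^ flips).view(np.int8).reshape(q.shape)


def masked_clip(q_faulty, qmax=63):
    # Masking-based clipping (assumed form): values whose magnitude exceeds the valid
    # quantized range are treated as corrupted and masked to zero, suppressing the
    # large-magnitude error patterns that hurt classification accuracy the most.
    corrupted = np.abs(q_faulty.astype(np.int16)) > qmax
    return np.where(corrupted, 0, q_faulty).astype(np.int8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.02, size=4096)     # toy weight tensor
    scale = np.max(np.abs(w)) / 63           # per-tensor scale for the narrow range
    q = quantize_restricted(w, scale)
    q_faulty = inject_bit_flips(q, ber=1e-3, rng=rng)
    q_masked = masked_clip(q_faulty)
    err_raw = np.mean(np.abs(q_faulty.astype(np.int16) - q))
    err_masked = np.mean(np.abs(q_masked.astype(np.int16) - q))
    print(f"mean |error| without masking: {err_raw:.3f}, with masking: {err_masked:.3f}")

In this toy setting, a flip in one of the high-order bits pushes a weight far outside the valid range, so the masking step converts a potentially catastrophic error into a small one (the weight is simply zeroed), which is the general intuition behind clipping-based fault tolerance.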