Ergodic Approximate Deep Learning Accelerators

Cited by: 0
Authors
van Lijssel, Tim [1 ]
Balatsoukas-Stimming, Alexios [1 ]
Institutions
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
DOI
10.1109/IEEECONF59524.2023.10477076
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
As deep neural networks (DNNs) continue to grow in size and complexity, the demand for greater computational power and efficiency leads to the need for smaller transistors in DNN accelerators. However, chip miniaturization presents new challenges, such as an increased likelihood of fabrication-induced faults due to process variations. This study investigates the impact of these hard faults on classification accuracy by injecting faults into the memory of a bit-accurate emulator of a DNN accelerator. Our initial results show a large quality spread between different chips: because hard faults are non-ergodic, a minimum performance cannot be guaranteed. We therefore explore two mitigation strategies to reduce the quality spread and provide a reliable minimum performance across all chips. The first shifts individual words, minimizing the error per word, while the second blindly shifts the complete memory, randomizing faults over the memory. Results show that both methods reduce the quality spread while using fewer resources than traditional approaches such as ECC.
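The record only summarizes the two mitigation strategies, so the following is an illustrative sketch of the first one (per-word shifting), not the paper's actual implementation. It assumes an 8-bit word, circular bit rotation as the shift operation, and a stuck-at model for hard memory faults; all of these are assumptions not specified in the record.

```python
WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1

def apply_faults(word, faults):
    """Model stuck-at hard faults: `faults` maps bit index -> stuck value (0 or 1)."""
    for bit, stuck in faults.items():
        if stuck:
            word |= 1 << bit       # stuck-at-1: this cell always reads as 1
        else:
            word &= ~(1 << bit)    # stuck-at-0: this cell always reads as 0
    return word & MASK

def rotl(word, s):
    """Circular left rotation of an 8-bit word by s positions."""
    return ((word << s) | (word >> (WORD_BITS - s))) & MASK if s else word

def rotr(word, s):
    """Circular right rotation of an 8-bit word by s positions."""
    return ((word >> s) | (word << (WORD_BITS - s))) & MASK if s else word

def best_shift(word, faults, max_shift=WORD_BITS):
    """Per-word mitigation (sketch): try each rotation of the stored word,
    read it back through the faulty cells, undo the rotation, and keep the
    shift that minimizes the absolute error versus the original value."""
    best_s, best_err = 0, float("inf")
    for s in range(max_shift):
        stored = rotl(word, s)                   # write rotated word into faulty memory
        read_back = rotr(apply_faults(stored, faults), s)  # read and undo the rotation
        err = abs(read_back - word)
        if err < best_err:
            best_s, best_err = s, err
    return best_s, best_err
```

For example, with the LSB stuck at 1, storing the value 128 directly yields an error of 1, but rotating left by one bit moves the faulty cell under a bit that is already 1, so `best_shift(128, {0: 1})` finds a shift with zero error. The second strategy described in the abstract would instead apply one blind shift to the whole memory, trading per-word optimality for negligible bookkeeping.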
Pages: 734-738
Page count: 5