Ergodic Approximate Deep Learning Accelerators

Cited by: 0
Authors
van Lijssel, Tim [1 ]
Balatsoukas-Stimming, Alexios [1 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
DOI
10.1109/IEEECONF59524.2023.10477076
Chinese Library Classification
TP18 (Artificial Intelligence Theory)
Discipline Codes
081104; 0812; 0835; 1405
Abstract
As deep neural networks (DNNs) continue to grow in size and complexity, the demand for greater computational power and efficiency leads to the need for smaller transistors in DNN accelerators. However, chip miniaturization presents new challenges, such as an increased likelihood of fabrication-induced faults due to process variations. This study investigates the impact of these hard faults on classification accuracy by injecting faults into the memory of a bit-accurate emulator of a DNN accelerator. Our initial results show that there is a large quality spread between different chips and that a minimum performance cannot be guaranteed, due to the non-ergodic behavior of hard faults. Therefore, two mitigation strategies are explored to reduce the quality spread and to provide a reliable minimum performance across all chips. The first strategy shifts individual words, minimizing the error per word, while the second blindly shifts the complete memory, randomizing faults over the memory. Results show that both methods reduce the quality spread while using fewer resources than traditional approaches such as ECC.
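The per-word shifting idea from the abstract can be illustrated with a small sketch. The paper does not specify its exact encoding, so the following is a hypothetical model: a memory word suffers stuck-at faults at fixed bit positions, and the writer tries every cyclic rotation of the word, storing the rotation whose read-back value (after undoing the rotation) is closest to the original. All names and the fault model here are illustrative assumptions, not the authors' implementation.

```python
WORD_BITS = 8          # hypothetical word width for the sketch
MASK = (1 << WORD_BITS) - 1

def apply_faults(value, stuck):
    """Model hard faults: `stuck` maps a bit position to its forced value."""
    for pos, bit in stuck.items():
        value = value | (1 << pos) if bit else value & ~(1 << pos)
    return value & MASK

def rotl(value, n):
    n %= WORD_BITS
    return ((value << n) | (value >> (WORD_BITS - n))) & MASK

def rotr(value, n):
    return rotl(value, WORD_BITS - (n % WORD_BITS))

def best_shift_store(value, stuck):
    """Strategy 1 (sketch): try every rotation, keep the one whose
    read-back after faults is closest to the intended value."""
    best = None
    for n in range(WORD_BITS):
        stored = apply_faults(rotl(value, n), stuck)
        readback = rotr(stored, n)
        err = abs(readback - value)
        if best is None or err < best[0]:
            best = (err, n, readback)
    return best  # (error, shift amount, value read back)

stuck = {7: 0}                         # hypothetical stuck-at-0 fault in the MSB
value = 0b1011_0010                    # 178: MSB is set, so the fault matters
naive = apply_faults(value, stuck)     # storing unshifted loses the MSB -> 50
err, shift, readback = best_shift_store(value, stuck)
print(naive, err, shift, readback)     # rotating by 1 parks a 0 under the fault
```

In this toy case the unshifted store reads back 50 (an error of 128), while a one-bit rotation places a zero bit under the stuck-at-0 cell and recovers the value exactly, which is the intuition behind minimizing error per word. The second strategy, shifting the complete memory blindly, trades this per-word optimality for not having to store a shift amount per word.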
Pages: 734-738 (5 pages)
Related Papers (50 total)
  • [41] Deep Learning at Scale on NVIDIA V100 Accelerators
    Xu, Rengan
    Han, Frank
    Ta, Quy
    PROCEEDINGS OF 2018 IEEE/ACM PERFORMANCE MODELING, BENCHMARKING AND SIMULATION OF HIGH PERFORMANCE COMPUTER SYSTEMS (PMBS 2018), 2018, : 23 - 32
  • [42] FIdelity: Efficient Resilience Analysis Framework for Deep Learning Accelerators
    He, Yi
    Balaprakash, Prasanna
    Li, Yanjing
    2020 53RD ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO 2020), 2020, : 270 - 281
  • [43] Co-designed Systems for Deep Learning Hardware Accelerators
    Brooks, David M.
    2018 INTERNATIONAL SYMPOSIUM ON VLSI TECHNOLOGY, SYSTEMS AND APPLICATION (VLSI-TSA), 2018,
  • [44] Kernel Mapping Techniques for Deep Learning Neural Network Accelerators
    Ozdemir, Sarp
    Khasawneh, Mohammad
    Rao, Smriti
    Madden, Patrick H.
    ISPD'22: PROCEEDINGS OF THE 2022 INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, 2022, : 21 - 28
  • [45] Enhancing Collective Communication in MCM Accelerators for Deep Learning Training
    Laskar, Sabuj
    Majhi, Pranati
    Kim, Sungkeun
    Mahmud, Farabi
    Muzahid, Abdullah
    Kim, Eun Jung
    2024 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE, HPCA 2024, 2024, : 832 - 847
  • [47] Rapid Emulation of Approximate DNN Accelerators
    Farahbakhsh, Amirreza
    Hosseini, Seyedmehdi
    Kachuee, Sajjad
    Sharilkhani, Mohammad
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [48] Impact of Approximate Multipliers on VGG Deep Learning Network
    Hammad, Issam
    El-Sankary, Kamal
    IEEE ACCESS, 2018, 6 : 60438 - 60444
  • [49] DeepSPACE: Approximate Geospatial Query Processing with Deep Learning
    Vorona, Dimitri
    Kipf, Andreas
    Neumann, Thomas
    Kemper, Alfons
    27TH ACM SIGSPATIAL INTERNATIONAL CONFERENCE ON ADVANCES IN GEOGRAPHIC INFORMATION SYSTEMS (ACM SIGSPATIAL GIS 2019), 2019, : 500 - 503
  • [50] Approximate Estimation of the Nutritions of Consumed Food by Deep Learning
    Aydilek, Ibrahim Berkan
    2017 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ENGINEERING (UBMK), 2017, : 160 - 164