Ergodic Approximate Deep Learning Accelerators

Cited: 0
Authors
van Lijssel, Tim [1 ]
Balatsoukas-Stimming, Alexios [1 ]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
DOI
10.1109/IEEECONF59524.2023.10477076
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As deep neural networks (DNNs) continue to grow in size and complexity, the demand for greater computational power and efficiency drives the need for smaller transistors in DNN accelerators. However, chip miniaturization presents new challenges, such as an increased likelihood of fabrication-induced hard faults due to process variations. This study investigates the impact of these hard faults on classification accuracy by injecting faults into the memory of a bit-accurate emulator of a DNN accelerator. Our initial results show a large quality spread between different chips: because hard faults are non-ergodic, a minimum performance cannot be guaranteed. We therefore explore two mitigation strategies that reduce the quality spread and provide a reliable minimum performance across all chips. The first strategy shifts individual words to minimize the error per word, while the second blindly shifts the complete memory, randomizing faults across it. Results show that both methods reduce the quality spread while using fewer resources than traditional approaches such as ECC.
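The stuck-at fault model and the second ("blind shift") mitigation in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the 8-bit word width, toy memory size, and all function names are assumptions for the example.

```python
WORD_BITS = 8   # assumed weight word width (illustrative)
MEM_SIZE = 16   # toy memory size (illustrative)

def inject_stuck_at(word, mask, stuck):
    """Force the bits selected by `mask` to the values in `stuck`
    (stuck-at-0 where the stuck bit is 0, stuck-at-1 where it is 1)."""
    return (word & ~mask) | (stuck & mask)

def store_with_offset(weights, offset, mem_size=MEM_SIZE):
    """Blind-shift mitigation (sketch): write the weight array at a
    circular offset, so a given faulty cell corrupts a different,
    effectively randomized weight for each chosen offset."""
    mem = [0] * mem_size
    for i, w in enumerate(weights):
        mem[(i + offset) % mem_size] = w
    return mem

def read(mem, addr, faults):
    """Read a word, applying any hard fault mapped to this address.
    `faults` maps address -> (bit mask, stuck values)."""
    w = mem[addr]
    if addr in faults:
        mask, stuck = faults[addr]
        w = inject_stuck_at(w, mask, stuck)
    return w

# Example: the cell at address 3 has its MSB stuck at 1.
faults = {3: (0x80, 0x80)}
weights = list(range(10))
mem = store_with_offset(weights, offset=1)
corrupted = read(mem, 3, faults)  # weight 2 is read back with bit 7 set
```

With a different offset, the same faulty cell would hit a different weight, which is the randomization effect the blind-shift strategy relies on.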
Pages: 734-738 (5 pages)