An Empirical Fault Vulnerability Exploration of ReRAM-Based Process-in-Memory CNN Accelerators

Cited by: 0
Authors
Dorostkar, Aniseh [1 ]
Farbeh, Hamed [1 ]
Zarandi, Hamid R. [1 ]
Affiliations
[1] Amirkabir Univ Technol, Tehran Polytech, Tehran 158754413, Iran
Keywords
Circuit faults; Neural networks; Resistance; Random access memory; Virtual machine monitors; Matrix converters; Kernel; Convolutional neural networks (CNNs); fault vulnerability; hardware accelerators; processing-in-memory (PIM); resistive random-access memory (ReRAM); RRAM DEVICES;
DOI
10.1109/TR.2024.3405825
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) accelerators are a promising platform for processing the massively memory-intensive matrix-vector multiplications of neural networks in parallel, owing to their capability for analog computation, ultrahigh density, near-zero leakage current, and nonvolatility. Despite these advantages, ReRAM-based accelerators are highly error-prone due to fabrication limitations that lead to process variations and defects. These limitations degrade the accuracy of deep convolutional neural networks (CNNs) running on PIM accelerators. Although these CNN accelerators are widely deployed in safety-critical systems, their vulnerability to faults is not well explored. In this article, we develop a fault-injection framework to investigate the vulnerability of large-scale CNNs at both the software and hardware levels of the inference phase. Faulty ReRAM devices pose a further reliability challenge because they significantly degrade classification accuracy when CNN parameters are mapped to the accelerator. To investigate this challenge, we map the CNN learning parameters to the ReRAM crossbar and inject faults into the crossbar arrays. The proposed framework analyzes the impact of stuck-at-high (SaH) and stuck-at-low (SaL) fault models on different layers and locations of the CNN learning parameters. Through extensive fault injections, we show that the vulnerability of ReRAM-based PIM accelerators for CNNs is highly sensitive to the type and depth of the layers, the location of the learning parameters within each layer, and the value and type of the faults. Our observations show that different models have different vulnerabilities to faults; in particular, SaL reduces classification accuracy more than SaH.
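The fault-injection methodology the abstract describes can be illustrated with a minimal sketch. This is not the authors' framework: it assumes a simplified linear weight-to-conductance mapping onto a single crossbar, with fault locations drawn uniformly at random; the function name `inject_stuck_at` and the conductance range are illustrative choices, and SaH/SaL are modeled as cells pinned to the maximum/minimum conductance, respectively.

```python
import numpy as np

def inject_stuck_at(conductances, fault_rate, fault_type, rng=None):
    """Inject stuck-at faults into a crossbar conductance matrix.

    fault_type 'SaH' pins faulty cells to the maximum conductance
    (low-resistance state); 'SaL' pins them to the minimum conductance
    (high-resistance state). Fault locations are uniform at random.
    """
    rng = np.random.default_rng(rng)
    g = conductances.copy()
    mask = rng.random(g.shape) < fault_rate      # faulty-cell positions
    stuck_value = g.max() if fault_type == 'SaH' else g.min()
    g[mask] = stuck_value
    return g

# Map a toy weight matrix to conductances in [g_min, g_max], inject
# 1% SaL faults, and measure the matrix-vector product error that a
# crossbar column-current readout would see.
w = np.random.default_rng(0).normal(size=(64, 64))
g_min, g_max = 1e-6, 1e-4                        # assumed device range
g = g_min + (w - w.min()) / (w.max() - w.min()) * (g_max - g_min)
g_faulty = inject_stuck_at(g, fault_rate=0.01, fault_type='SaL', rng=1)
x = np.ones(64)                                  # input voltage vector
print(np.abs(g @ x - g_faulty @ x).mean())       # mean output error
```

Sweeping `fault_rate` and `fault_type` per layer of a mapped CNN and re-running inference yields the kind of layer- and location-dependent vulnerability profile the article reports.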
Pages: 1 - 15
Page count: 15
Related Papers
50 records in total
  • [31] A Quantized Training Framework for Robust and Accurate ReRAM-based Neural Network Accelerators
    Zhang, Chenguang
    Zhou, Pingqiang
    2021 26TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2021, : 43 - 48
  • [32] On-Line Fault Protection for ReRAM-Based Neural Networks
    Li, Wen
    Wang, Ying
    Liu, Cheng
    He, Yintao
    Liu, Lian
    Li, Huawei
    Li, Xiaowei
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (02) : 423 - 437
  • [33] AtomLayer: A Universal ReRAM-Based CNN Accelerator with Atomic Layer Computation
    Qiao, Ximing
    Cao, Xiong
    Yang, Huanrui
    Song, Linghao
    Li, Hai
    2018 55TH ACM/ESDA/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2018,
  • [34] A ReRAM-Based Processing-In-Memory Architecture for Hyperdimensional Computing
    Liu, Cong
    Wu, Kaibo
    Liu, Haikun
    Jin, Hai
    Liao, Xiaofei
    Duan, Zhuohui
    Xu, Jiahong
    Li, Huize
    Zhang, Yu
    Yang, Jing
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2025, 44 (02) : 512 - 524
  • [35] ReRAM-based Processing-in-Memory Architecture for Blockchain Platforms
    Wang, Fang
    Shen, Zhaoyan
    Han, Lei
    Shao, Zili
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 615 - 620
  • [36] ReRAM-based In-Memory Computation of Galois Field arithmetic
    Mandal, Swagata
    Bhattacharjee, Debjyoti
    Tavva, Yaswanth
    Chattopadhyay, Anupam
    PROCEEDINGS OF THE 2018 26TH IFIP/IEEE INTERNATIONAL CONFERENCE ON VERY LARGE SCALE INTEGRATION (VLSI-SOC), 2018, : 1 - 6
  • [37] Efficient Process-in-Memory Architecture Design for Unsupervised GAN-based Deep Learning using ReRAM
    Chen, Fan
    Song, Linghao
    Li, Hai Helen
    GLSVLSI '19 - PROCEEDINGS OF THE 2019 ON GREAT LAKES SYMPOSIUM ON VLSI, 2019, : 423 - 428
  • [38] FullReuse: A Novel ReRAM-based CNN Accelerator Reusing Data in Multiple Levels
    Luo, Changhang
    Diao, Jietao
    Chen, Changlin
    2020 THE 5TH IEEE INTERNATIONAL CONFERENCE ON INTEGRATED CIRCUITS AND MICROSYSTEMS (ICICM 2020), 2020, : 177 - 183
  • [39] Quarry: Quantization-based ADC Reduction for ReRAM-based Deep Neural Network Accelerators
    Azamat, Azat
    Asim, Faaiz
    Lee, Jongeun
    2021 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN (ICCAD), 2021,
  • [40] ReHarvest: An ADC Resource-Harvesting Crossbar Architecture for ReRAM-Based DNN Accelerators
    Xu, Jiahong
    Li, Haikun
    Duan, Zhuohui
    Liao, Xiaofei
    Jin, Hai
    Yang, Xiaokang
    Li, Huize
    Liu, Cong
    Mao, Fubing
    Zhang, Yu
    ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2024, 21 (03)