Device Variation Effects on Neural Network Inference Accuracy in Analog In-Memory Computing Systems

Cited by: 14
Authors
Wang, Qiwen [1 ]
Park, Yongmo [1 ]
Lu, Wei D. [1 ]
Affiliations
[1] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
Funding
US National Science Foundation (NSF);
Keywords
analog computing; deep neural networks; emerging memory; in-memory computing; process-in-memory; RRAM; MEMRISTOR; NOISE;
DOI
10.1002/aisy.202100199
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In analog in-memory computing systems based on nonvolatile memories such as resistive random-access memory (RRAM), neural network models are often trained offline and the weights are then programmed onto memory devices as conductance values. The programmed weight values inevitably deviate from their targets during programming. This effect can be pronounced for emerging memories such as RRAM, PCRAM, and MRAM due to the stochastic nature of the programming process. Unlike noise, these weight deviations do not change during inference. The performance of neural network models is investigated against this programming variation under realistic system limitations, including limited device on/off ratios, memory array size, analog-to-digital converter (ADC) characteristics, and signed weight representations. Approaches to mitigate such device and circuit nonidealities through architecture-aware training are also evaluated. The effectiveness of variation injection during training in improving inference robustness is discussed, along with the effects of neural network training parameters such as the learning rate schedule.
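The setup the abstract describes (offline-trained weights mapped to differential conductance pairs, clipped by a limited on/off ratio, then perturbed once by a static programming variation, optionally injected during training) can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the differential-pair mapping, the Gaussian variation model, and all parameter values (`sigma`, `on_off_ratio`) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)


def program_weights(w, sigma=0.1, on_off_ratio=20.0, rng=rng):
    """Map trained weights to conductances with static programming variation.

    Each signed weight is represented as a differential conductance pair
    (G+ - G-). Conductances are clipped to a floor set by the limited
    on/off ratio, then perturbed by Gaussian variation whose standard
    deviation is `sigma` times the conductance range. The perturbation is
    drawn once, modeling a fixed deviation rather than time-varying noise.
    """
    g_max = np.abs(w).max()
    g_min = g_max / on_off_ratio  # off-state floor from limited on/off ratio
    g_pos = np.clip(np.where(w > 0, np.abs(w), 0.0), g_min, g_max)
    g_neg = np.clip(np.where(w < 0, np.abs(w), 0.0), g_min, g_max)
    # Static programming deviation, applied independently to each device.
    g_pos = g_pos + rng.normal(0.0, sigma * (g_max - g_min), w.shape)
    g_neg = g_neg + rng.normal(0.0, sigma * (g_max - g_min), w.shape)
    return g_pos - g_neg  # signed weight recovered from the differential pair


def noisy_forward(w, x, sigma=0.1, rng=rng):
    """Variation-injected forward pass (sketch of training-time injection).

    During training, each forward pass re-samples a programming variation,
    so the network learns weights that remain accurate after deployment.
    """
    w_prog = program_weights(w, sigma=sigma, rng=rng)
    return x @ w_prog
```

With `sigma=0` the model reduces to the clipping effect alone: weights smaller in magnitude than the off-state floor are zeroed by the differential pair, which is one way the limited on/off ratio degrades accuracy even before variation is added.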
Pages: 12