Device Variation Effects on Neural Network Inference Accuracy in Analog In-Memory Computing Systems

Cited by: 14
Authors
Wang, Qiwen [1 ]
Park, Yongmo [1 ]
Lu, Wei D. [1 ]
Affiliation
[1] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
Funding
US National Science Foundation;
Keywords
analog computing; deep neural networks; emerging memory; in-memory computing; process-in-memory; RRAM; MEMRISTOR; NOISE;
DOI
10.1002/aisy.202100199
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In analog in-memory computing systems based on nonvolatile memories such as resistive random-access memory (RRAM), neural network models are often trained offline and the weights are then programmed onto memory devices as conductance values. The programmed weight values inevitably deviate from the target values during the programming process. This effect can be pronounced for emerging memories such as RRAM, PCRAM, and MRAM due to the stochastic nature of the programming process. Unlike noise, these weight deviations do not change during inference. The performance of neural network models is investigated against this programming variation under realistic system limitations, including limited device on/off ratios, memory array size, analog-to-digital converter (ADC) characteristics, and signed weight representations. Approaches to mitigate such device and circuit nonidealities through architecture-aware training are also evaluated, including the effectiveness of variation injection during training for improving inference robustness and the effects of neural network training parameters such as the learning rate schedule.
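
The abstract refers to variation injection during training as a mitigation approach. As an illustrative aside only, a minimal sketch is given below, assuming the programming variation can be modeled as multiplicative Gaussian noise applied to the weights during each training forward pass. The layer name NoisyLinear, the noise level sigma, and the use of PyTorch are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    # Linear layer that perturbs its weights with fresh multiplicative noise on
    # every training forward pass, emulating random conductance programming errors.
    # (Hypothetical sketch; not the method described in the paper.)
    def __init__(self, in_features, out_features, sigma=0.05, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.sigma = sigma  # assumed relative standard deviation of the device variation

    def forward(self, x):
        weight = self.weight
        if self.training and self.sigma > 0:
            # Sample a new perturbation per batch so the network learns weights that
            # stay accurate under random deviations that are fixed at inference time.
            weight = weight * (1.0 + self.sigma * torch.randn_like(weight))
        return F.linear(x, weight, self.bias)

# Usage: a small multilayer perceptron trained with variation injection.
model = nn.Sequential(NoisyLinear(784, 256, sigma=0.05), nn.ReLU(),
                      NoisyLinear(256, 10, sigma=0.05))
logits = model(torch.randn(32, 784))  # each training-mode call sees a different perturbation

At inference time (model.eval()), the nominal weights are used unperturbed; robustness to the frozen programming deviations then depends on how well the assumed variation distribution covers the actual device behavior.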
Pages: 12
Related Papers
50 records in total
  • [31] Analog Neural Network Inference Accuracy in One-Selector One-Resistor Memory Arrays
    Kim, Joshua E.
    Xiao, T. Patrick
    Bennett, Christopher H.
    Wilson, Donald
    Spear, Matthew
    Siath, Maximilian
    Feinberg, Ben
    Agarwal, Sapan
    Marinella, Matthew J.
    2022 IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING, ICRC, 2022, : 7 - 12
  • [32] A Compressed Spiking Neural Network Onto a Memcapacitive In-Memory Computing Array
    Oshio, Reon
    Sugahara, Takuya
    Sawada, Atsushi
    Kimura, Mutsumi
    Zhang, Renyuan
    Nakashima, Yasuhiko
    IEEE MICRO, 2024, 44 (01) : 8 - 16
  • [33] A Parallel Randomized Neural Network on In-memory Cluster Computing for Big Data
    Dai, Tongwu
    Li, Kenli
    Chen, Cen
    2017 13TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2017,
  • [34] Resistive-RAM-Based In-Memory Computing for Neural Network: A Review
    Chen, Weijian
    Qi, Zhi
    Akhtar, Zahid
    Siddique, Kamran
    ELECTRONICS, 2022, 11 (22)
  • [35] A Skyrmion Racetrack Memory based Computing In-memory Architecture for Binary Neural Convolutional Network
    Pan, Yu
    Ouyang, Peng
    Zhao, Yinglin
    Yin, Shouyi
    Zhang, Youguang
    Wei, Shaojun
    Zhao, Weisheng
    GLSVLSI '19 - PROCEEDINGS OF THE 2019 ON GREAT LAKES SYMPOSIUM ON VLSI, 2019, : 271 - 274
  • [36] CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation
    Eldebiky, Amro
    Zhang, Grace Li
    Boecherer, Georg
    Li, Bing
    Schlichtmann, Ulf
    2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023,
  • [37] Combined HW/SW Drift and Variability Mitigation for PCM-Based Analog In-Memory Computing for Neural Network Applications
    Antolini, Alessio
    Paolino, Carmine
    Zavalloni, Francesco
    Lico, Andrea
    Scarselli, Eleonora Franchi
    Mangia, Mauro
    Pareschi, Fabio
    Setti, Gianluca
    Rovatti, Riccardo
    Torres, Mattia Luigi
    Carissimi, Marcella
    Pasotti, Marco
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (01) : 395 - 407
  • [38] Achieving Accurate In-Memory Neural Network Inference with Highly Overlapping Nonvolatile Memory State Distributions
    Marinella, Matthew J.
    Xiao, T. Patrick
    Feinberg, Ben
    Bennett, Chris
    Agrawal, Vineet
    Puchner, Helmut
    Agarwal, Sapan
    6TH IEEE ELECTRON DEVICES TECHNOLOGY AND MANUFACTURING CONFERENCE (EDTM 2022), 2022, : 330 - 332
  • [39] Evaluating an Analog Main Memory Architecture for All-Analog In-Memory Computing Accelerators
    Adam, Kazybek
    Monga, Dipesh
    Numan, Omar
    Singh, Gaurav
    Halonen, Kari
    Andraud, Martin
    2024 IEEE 6TH INTERNATIONAL CONFERENCE ON AI CIRCUITS AND SYSTEMS, AICAS 2024, 2024, : 248 - 252
  • [40] An In-Memory Analog Computing Co-Processor for Energy-Efficient CNN Inference on Mobile Devices
    Elbtity, Mohammed
    Singh, Abhishek
    Reidy, Brendan
    Guo, Xiaochen
    Zand, Ramtin
    2021 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2021), 2021, : 188 - 193