Fast and robust analog in-memory deep neural network training

Citations: 0
Authors
Rasch, Malte J. [1 ,2 ]
Carta, Fabio [1 ]
Fagbohungbe, Omobayode [1 ]
Gokmen, Tayfun [1 ]
Affiliations
[1] IBM Res, TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
[2] Sony AI, Zurich, Switzerland
Keywords
DEVICES; CHIP;
DOI
10.1038/s41467-024-51221-z
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
Analog in-memory computing is a promising future technology for efficiently accelerating deep learning networks. While using in-memory computing to accelerate the inference phase has been studied extensively, accelerating the training phase has received less attention, despite its arguably much larger compute demand. Some analog in-memory training algorithms have been suggested, but they either invoke a significant amount of auxiliary digital compute, accumulating the gradient in digital floating-point precision and thereby limiting the potential speed-up, or suffer from the need to program reference conductance values near-perfectly to establish an algorithmic zero point. Here, we propose two improved algorithms for in-memory training that retain the same fast runtime complexity while removing the requirement of a precise zero point. We further investigate the limits of the algorithms in terms of conductance noise, symmetry, retention, and endurance, which narrows down the device material choices adequate for fast and robust in-memory deep neural network training. Recent hardware implementations of analog in-memory computing have focused mainly on accelerating inference deployment. In this work, to improve the training process, the authors propose algorithms for supervised training of deep neural networks on analog in-memory AI accelerator hardware.
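To give a rough intuition for the zero-point issue the abstract refers to, the following is a minimal NumPy sketch, not the authors' algorithm: it models an analog crossbar whose effective weight is the conductance minus a programmed reference array, trains it with simple pulsed outer-product updates, and shows how an imprecisely programmed reference leaves a residual offset. All names and parameters (e.g. pulsed_update, dw_min, the noise levels) are invented for illustration under these assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): a toy NumPy model of an
# analog crossbar layer trained with in-memory outer-product updates. A fixed
# reference conductance G_ref defines the algorithmic zero point; the effective
# weight is W = G - G_ref. If G_ref is programmed imprecisely, every weight
# inherits a constant offset, which is the failure mode the improved algorithms
# in the paper aim to remove.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
G = rng.uniform(0.4, 0.6, size=(n_out, n_in))        # device conductances (arbitrary units)
G_ref = np.full((n_out, n_in), 0.5)                   # ideal zero-point reference
G_ref_noisy = G_ref + rng.normal(0, 0.02, G.shape)    # imperfectly programmed reference

def forward(x, G, G_ref):
    """Analog matrix-vector multiply: effective weight is the difference to the reference."""
    return (G - G_ref) @ x

def pulsed_update(G, x, d, lr=0.05, dw_min=1e-3):
    """Toy in-memory update: quantize the outer product -lr * d x^T into discrete
    conductance increments of size dw_min (crudely mimicking pulsed device
    programming) and apply it in place."""
    dW = -lr * np.outer(d, x)
    pulses = np.round(dW / dw_min)
    G += pulses * dw_min
    np.clip(G, 0.0, 1.0, out=G)                        # devices have bounded conductance
    return G

# Train the toy layer to reproduce a random linear target.
W_target = rng.normal(0, 0.15, size=(n_out, n_in))
for step in range(2000):
    x = rng.normal(size=n_in)
    y = forward(x, G, G_ref_noisy)                     # training sees only the noisy reference
    d = y - W_target @ x                               # error signal (squared-loss gradient)
    pulsed_update(G, x, d)

# The learned conductances absorb the reference error, so evaluating against the
# true zero point exposes a residual offset roughly the size of the reference noise.
print("error vs. noisy reference:", np.abs((G - G_ref_noisy) - W_target).mean())
print("error vs. true reference: ", np.abs((G - G_ref) - W_target).mean())
```

Under these toy assumptions the first error stays small while the second is dominated by the reference-programming noise, which is one way to see why algorithms that do not depend on a precisely programmed zero point are attractive.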
Pages: 15
Related papers
50 records total
  • [1] Analog In-Memory Subthreshold Deep Neural Network Accelerator
    Fick, L.
    Blaauw, D.
    Sylvester, D.
    Skrzyniarz, S.
    Parikh, M.
    Fick, D.
    2017 IEEE CUSTOM INTEGRATED CIRCUITS CONFERENCE (CICC), 2017,
  • [2] FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision
    Imani, Mohsen
    Gupta, Saransh
    Kim, Yeseong
    Rosing, Tajana
    PROCEEDINGS OF THE 2019 46TH INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA '19), 2019, : 802 - 815
  • [3] TID Response of an Analog In-Memory Neural Network Accelerator
    Tolleson, B.
    Bennett, C.
    Xiao, T. Patrick
    Wilson, D.
    Short, J.
    Kim, J.
    Hughart, D. R.
    Gilbert, N.
    Agarwal, S.
    Barnaby, H. J.
    Marinella, M. J.
    2023 IEEE INTERNATIONAL RELIABILITY PHYSICS SYMPOSIUM, IRPS, 2023,
  • [4] Time-Multiplexed Flash ADC for Deep Neural Network Analog in-Memory Computing
    Boni, Andrea
    Frattini, Francesco
    Caselli, Michele
    2021 28TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS, AND SYSTEMS (IEEE ICECS 2021), 2021,
  • [5] Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks
    Joshi, Vinay
    He, Wangxin
    Seo, Jae-sun
    Rajendran, Bipin
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [6] RIBoNN: Designing Robust In-Memory Binary Neural Network Accelerators
    Kundu, Shamik
    Malhotra, Akul
    Raha, Arnab
    Gupta, Sumeet K.
    Basu, Kanad
    2022 IEEE INTERNATIONAL TEST CONFERENCE (ITC), 2022, : 504 - 508
  • [7] OxRRAM-Based Analog in-Memory Computing for Deep Neural Network Inference: A Conductance Variability Study
    Doevenspeck, J.
    Degraeve, R.
    Fantini, A.
    Cosemans, S.
    Mallik, A.
    Debacker, P.
    Verkest, D.
    Lauwereins, R.
    Dehaene, W.
    IEEE TRANSACTIONS ON ELECTRON DEVICES, 2021, 68 (05) : 2301 - 2305
  • [8] Distributed Deep Learning Framework based on Shared Memory for Fast Deep Neural Network Training
    Lim, Eun-Ji
    Ahn, Shin-Young
    Park, Yoo-Mi
    Choi, Wan
    2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), 2018, : 1239 - 1242
  • [9] Noise tolerant ternary weight deep neural networks for analog in-memory inference
    Doevenspeck, Jonas
    Vrancx, Peter
    Laubeuf, Nathan
    Mallik, Arindam
    Debacker, Peter
    Verkest, Diederik
    Lauwereins, Rudy
    Dehaene, Wim
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks
    Mishra, Abhishek Kumar
    Das, Anup Kumar
    Kandasamy, Nagarajan
    ELECTRONICS, 2022, 11 (16)