Fast and robust analog in-memory deep neural network training

Cited by: 0
Authors
Rasch, Malte J. [1 ,2 ]
Carta, Fabio [1 ]
Fagbohungbe, Omobayode [1 ]
Gokmen, Tayfun [1 ]
Affiliations
[1] IBM Res, TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
[2] Sony AI, Zurich, Switzerland
Keywords
DEVICES; CHIP;
DOI
10.1038/s41467-024-51221-z
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Science];
Subject Classification
07; 0710; 09;
Abstract
Analog in-memory computing is a promising future technology for efficiently accelerating deep learning networks. While using in-memory computing to accelerate the inference phase has been studied extensively, accelerating the training phase has received less attention, despite the arguably much larger compute demand of training. While some analog in-memory training algorithms have been suggested, they either invoke a significant amount of auxiliary digital compute, accumulating the gradient in digital floating-point precision and thus limiting the potential speed-up, or suffer from the need to program reference conductance values nearly perfectly in order to establish an algorithmic zero point. Here, we propose two improved algorithms for in-memory training that retain the same fast runtime complexity while removing the requirement of a precise zero point. We further investigate the limits of the algorithms in terms of conductance noise, symmetry, retention, and endurance, narrowing down the device material choices adequate for fast and robust in-memory deep neural network training. Recent analog in-memory computing hardware implementations have focused mainly on accelerating inference deployment. In this work, the authors propose algorithms for supervised training of deep neural networks on analog in-memory AI accelerator hardware.
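The abstract only sketches the training scheme at a high level. The following minimal NumPy sketch is an illustrative assumption, not the authors' implementation or the paper's exact algorithms: it shows the rank-one outer-product update that analog crossbars perform in-memory, how an imprecisely programmed zero point on a gradient-accumulation array injects a constant bias into the weights, and how periodically flipping the sign of the accumulation (one possible remedy, used here purely for illustration) lets that bias average out. Array sizes, the device model, the learning rates, and the transfer period are arbitrary choices.

```python
# Minimal sketch (not the authors' code) of in-memory outer-product training
# with an auxiliary gradient-accumulation array whose zero point is imprecise.
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 4, 8
W = rng.normal(0.0, 0.1, size=(n_out, n_in))   # weight array (crossbar conductances)
A = np.zeros((n_out, n_in))                    # analog gradient-accumulation array
A_offset = 0.05 * rng.normal(size=(n_out, n_in))  # error in A's zero-point reference
lr, transfer_lr = 0.1, 0.05                    # illustrative learning rates
chopper = 1.0                                  # +/-1 sign applied to the accumulation

for step in range(100):
    x = rng.normal(size=n_in)                  # forward activations for one layer
    y = W @ x                                  # in-memory matrix-vector multiply
    delta = y - np.zeros(n_out)                # back-propagated error (dummy target)

    # Rank-one outer-product update, executed in-memory by pulse coincidences.
    # Instead of applying it to W directly, it is first accumulated on A,
    # multiplied by the current chopper sign.
    A += -lr * chopper * np.outer(delta, x)

    # Periodic transfer: A is read relative to its (imprecise) zero point and a
    # fraction is moved onto W. Because the chopper sign also multiplies the
    # read-out, the gradient contribution carries the sign twice (so it is
    # unchanged), while the constant offset alternates sign across transfers.
    if (step + 1) % 10 == 0:
        W += transfer_lr * chopper * (A + A_offset)
        A[:] = 0.0
        chopper = -chopper
```

With the sign fixed at +1, A_offset would be added to W at every transfer and the weights would drift systematically; with the alternating sign, consecutive transfers cancel the offset on average, which is the kind of bias the abstract refers to when it mentions removing the need for a precisely programmed zero point.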
Pages: 15