Accelerating Deep Neural Networks with Analog Memory Devices

Cited by: 0
Authors
Ambrogio, Stefano [1 ]
Narayanan, Pritish [1 ]
Tsai, Hsinyu [1 ]
Mackin, Charles [1 ]
Spoon, Katherine [1 ]
Chen, An [1 ]
Fasoli, Andrea [1 ]
Friz, Alexander [1 ]
Burr, Geoffrey W. [1 ]
Affiliation
[1] IBM Res Almaden, 650 Harry Rd, San Jose, CA 95120 USA
Keywords
NVM; PCM; AI; Accelerator; Analog computing; Training; Inference
DOI
10.1109/aicas48895.2020.9073978
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Acceleration of training and inference of Deep Neural Networks (DNNs) with non-volatile memory (NVM) arrays, such as Phase-Change Memory (PCM), offers promising advantages in energy efficiency and speed over digital implementations on CPUs and GPUs. By combining PCM devices with CMOS circuits, high training accuracy can be achieved, yielding software-equivalent results on small and medium datasets. In addition, weights encoded across multiple PCM devices enable high-speed, low-power inference, as shown here for Long Short-Term Memory (LSTM) networks.
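The abstract's core idea, encoding signed weights as conductances of multiple PCM devices and computing matrix-vector products as analog current sums, can be sketched in simulation. The snippet below is a minimal illustration only, assuming a simple differential pair (G+, G-) per weight and additive Gaussian conductance noise; the constant `G_MAX`, the noise level `sigma`, and all function names are illustrative assumptions, not the paper's actual circuit or device model.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MAX = 25.0  # assumed maximum PCM conductance (arbitrary units), illustrative only


def encode_weights(W, g_max=G_MAX):
    """Encode signed weights as differential conductance pairs (G+, G-).

    Positive weights map to G+, negative weights to G-, scaled so the
    largest |weight| reaches g_max. A simplified stand-in for the
    multi-device weight encoding mentioned in the abstract.
    """
    scale = g_max / np.max(np.abs(W))
    g_pos = np.clip(W, 0.0, None) * scale   # conductances are non-negative
    g_neg = np.clip(-W, 0.0, None) * scale
    return g_pos, g_neg, scale


def analog_mvm(x, g_pos, g_neg, scale, sigma=0.3):
    """Matrix-vector multiply as two current summations plus device noise.

    Each output is the difference of summed column currents (Kirchhoff's
    law) on the positive and negative arrays, rescaled to weight units.
    """
    noisy = lambda g: g + rng.normal(0.0, sigma, g.shape)  # device variability
    i_pos = noisy(g_pos) @ x
    i_neg = noisy(g_neg) @ x
    return (i_pos - i_neg) / scale


W = rng.standard_normal((4, 8))   # a small weight matrix
x = rng.standard_normal(8)        # input activations
g_pos, g_neg, scale = encode_weights(W)
y = analog_mvm(x, g_pos, g_neg, scale)
print(np.max(np.abs(y - W @ x)))  # deviation caused by simulated device noise
```

With `sigma=0` the readout reproduces `W @ x` exactly; the noise term is a crude proxy for the device non-idealities that analog-AI accuracy studies must account for.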
Pages: 149-152 (4 pages)