Resistive Memory-Based In-Memory Computing: From Device and Large-Scale Integration System Perspectives

Cited by: 64
Authors
Yan, Bonan [1 ]
Li, Bing [1 ]
Qiao, Ximing [1 ]
Xue, Cheng-Xin [2 ]
Chang, Meng-Fan [2 ]
Chen, Yiran [1 ]
Li, Hai [1 ]
Affiliations
[1] Duke Univ, Dept Elect & Comp Engn, 100 Sci Dr, Durham, NC 27708 USA
[2] Natl Tsing Hua Univ, Dept Elect Engn, Delta Bldg 101,Sect 2,Kuang Fu Rd, Hsinchu 30013, Taiwan
Funding
U.S. National Science Foundation;
Keywords
accelerators; in-memory computing; neural networks; process-in-memory; resistive memory; NONVOLATILE MEMORY; LOGIC OPERATIONS; NEURAL-NETWORKS; RRAM; MECHANISM; SYNAPSE; ARRAY;
DOI
10.1002/aisy.201900068
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In-memory computing is a computing scheme that integrates data storage and arithmetic computation functions. Resistive random access memory (RRAM) arrays with innovative peripheral circuitry provide the capability of performing vector-matrix multiplication beyond basic Boolean logic. With this memory-computation duality, RRAM-based in-memory computing enables an efficient hardware solution for matrix-multiplication-dependent neural networks and related applications. Herein, the recent development of nanoscale RRAM devices and the parallel progress at the circuit and microarchitecture levels are discussed. Emphasis is placed on the RRAM device properties and characteristics that make it well suited for implementing analog synapses and neurons. 3D-stackable RRAM and on-chip training are introduced in the context of large-scale integration. The circuit design and system organization of RRAM-based in-memory computing are essential to breaking the von Neumann bottleneck. These outcomes illuminate the way toward the large-scale implementation of ultra-low-power, dense neural network accelerators.
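To make the memory-computation duality concrete, the minimal Python sketch below models how an RRAM crossbar performs analog vector-matrix multiplication: weights are stored as cell conductances, input voltages drive the word lines, and each bit-line current sums the voltage-conductance products according to Ohm's and Kirchhoff's laws. The array size, conductance window, number of programmable levels, and helper names (program_weights, crossbar_vmm) are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Illustrative crossbar dimensions and device parameters (assumed, not from the paper).
ROWS, COLS = 64, 32          # word lines x bit lines
G_MIN, G_MAX = 1e-6, 1e-4    # assumed conductance window: 1 uS to 100 uS
LEVELS = 16                  # assumed number of programmable conductance states per cell

rng = np.random.default_rng(0)

def program_weights(weights):
    """Map weights in [0, 1] to quantized cell conductances (the 'write' phase)."""
    q = np.round(weights * (LEVELS - 1)) / (LEVELS - 1)   # quantize to LEVELS discrete states
    return G_MIN + q * (G_MAX - G_MIN)                    # conductance matrix G, shape (ROWS, COLS)

def crossbar_vmm(G, v_in):
    """Analog vector-matrix multiply: bit-line current I_j = sum_i G[i, j] * v_in[i]."""
    return v_in @ G                                       # Kirchhoff current summation per column

# Example: one neural-network layer mapped onto the crossbar.
weights = rng.random((ROWS, COLS))        # layer weights normalized to [0, 1]
G = program_weights(weights)              # store weights as conductances
v_in = rng.uniform(0.0, 0.2, size=ROWS)   # read voltages applied to the word lines (volts)
i_out = crossbar_vmm(G, v_in)             # column currents sensed at the bit lines

print(i_out[:5])                          # dot-product results encoded as currents (amperes)
```

In an actual macro, the peripheral circuitry the abstract refers to (row drivers or DACs and column sense amplifiers or ADCs) handles the digital-to-analog and analog-to-digital conversion that this sketch abstracts away.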
Pages: 16
Related Papers (50 records in total)
  • [1] Memristive-based in-memory computing: from device to large-scale CMOS integration
    Quesada, E. Perez-Bosch
    Perez, E.
    Mahadevaiah, M. Kalishettyhalli
    Wenger, C.
    NEUROMORPHIC COMPUTING AND ENGINEERING, 2021, 1 (02):
  • [2] A large-scale in-memory computing for deep neural network with trained quantization
    Cheng, Yuan
    Wang, Chao
    Chen, Hai-Bao
    Yu, Hao
    INTEGRATION-THE VLSI JOURNAL, 2019, 69 : 345 - 355
  • [3] Spangle: A Distributed In-Memory Processing System for Large-Scale Arrays
    Kim, Sangchul
    Kim, Bogyeong
    Moon, Bongki
    2021 IEEE 37TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2021), 2021, : 1799 - 1810
  • [4] Memory-based Data Management for Large-scale Distributed Rendering
    Zheng, Ran
    Jia, Jinli
    Jin, Hai
    Lv, Xinqiao
    Yang, Shuai
    2016 IEEE 13TH INTERNATIONAL CONFERENCE ON E-BUSINESS ENGINEERING (ICEBE), 2016, : 123 - 128
  • [5] Programmable Stateful In-Memory Computing Paradigm via a Single Resistive Device
    Kang, Wang
    Zhang, He
    Ouyang, Peng
    Zhang, Youguang
    Zhao, Weisheng
    2017 IEEE 35TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD), 2017, : 613 - 616
  • [6] Empirical Guide to Use of Persistent Memory for Large-Scale In-Memory Graph Analysis
    Bae, Hanyeoreum
    Kwon, Miryeong
    Gouk, Donghyun
    Han, Sanghyun
    Koh, Sungjoon
    Lee, Changrim
    Park, Dongchul
    Jung, Myoungsoo
    2021 IEEE 39TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2021), 2021, : 316 - 320
  • [7] Physics-based modeling approaches of resistive switching devices for memory and in-memory computing applications
    Ielmini, D.
    Milo, V.
    JOURNAL OF COMPUTATIONAL ELECTRONICS, 2017, 16 (04) : 1121 - 1143
  • [8] Resistive-RAM-Based In-Memory Computing for Neural Network: A Review
    Chen, Weijian
    Qi, Zhi
    Akhtar, Zahid
    Siddique, Kamran
    ELECTRONICS, 2022, 11 (22)
  • [9] Rapid learning with phase-change memory-based in-memory computing through learning-to-learn
    Ortner, Thomas
    Petschenig, Horst
    Vasilopoulos, Athanasios
    Renner, Roland
    Brglez, Špela
    Limbacher, Thomas
    Piñero, Enrique
    Linares-Barranco, Alejandro
    Pantazi, Angeliki
    Legenstein, Robert
    NATURE COMMUNICATIONS, 16 (1)