Floating Gate Transistor-Based Accurate Digital In-Memory Computing for Deep Neural Networks

Citations: 1
Authors
Han, Runze [1 ]
Huang, Peng [1 ]
Xiang, Yachen [1 ]
Hu, Hong [2 ]
Lin, Sheng [3 ]
Dong, Peiyan [3 ]
Shen, Wensheng [1 ]
Wang, Yanzhi [3 ]
Liu, Xiaoyan [1 ]
Kang, Jinfeng [1 ]
Affiliations
[1] Peking Univ, Sch Integrated Circuits, Beijing 100871, Peoples R China
[2] GigaDevice Semicond Inc, Beijing 100094, Peoples R China
[3] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02461 USA
Keywords
deep neural networks; flash memory; floating gate transistors; in-memory computing; parallel computing; MEMRISTOR; EFFICIENT; GAME; GO;
DOI
10.1002/aisy.202200127
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
To improve the computing speed and energy efficiency of deep neural network (DNN) applications, in-memory computing with nonvolatile memory (NVM) has been proposed to eliminate the time-consuming and energy-hungry data shuttling between memory and processor. Herein, a digital in-memory computing method for convolution, the core operation of DNNs, is proposed. Based on this method, a floating gate transistor-based in-memory computing chip is fabricated that performs accurate convolution with high parallelism. Unlike analogue or mixed analogue-digital in-memory computing techniques, the proposed digital method achieves central processing unit (CPU)-equivalent precision with the same neural network architecture and parameters. A hardware LeNet-5 neural network is built on the fabricated chip; it achieves 96.25% accuracy on the full Modified National Institute of Standards and Technology (MNIST) database, identical to the result computed by a CPU with the same network architecture and parameters.
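The key claim in the abstract is that a fully digital in-memory scheme reproduces CPU results bit-exactly, because each multiply-accumulate is decomposed into 1-bit operations (bitwise AND plus shift-add) rather than summed as analogue currents. The sketch below illustrates that idea in software; the function name `bitserial_conv2d` and the bit-serial decomposition are illustrative assumptions, not the paper's actual circuit, and only unsigned integer operands are handled.

```python
import numpy as np

def bitserial_conv2d(image, kernel, bits=8):
    """Convolution where every multiply is decomposed into bitwise ANDs
    and shift-adds, mimicking how a digital in-memory array (e.g., flash
    cells each storing a single weight bit) can compute an exact integer
    MAC.  Illustrative sketch only -- not the paper's circuit design."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1), dtype=np.int64)
    w = kernel.astype(np.int64).ravel()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k].astype(np.int64).ravel()
            acc = 0
            for wb in range(bits):            # weight bit-planes
                wbit = (w >> wb) & 1          # one stored bit per cell
                for ab in range(bits):        # activation bit-planes
                    abit = (patch >> ab) & 1
                    # bitwise AND = 1-bit multiply; the sum over the
                    # column is a popcount, weighted by the bit position
                    acc += int(np.sum(wbit & abit)) << (wb + ab)
            out[i, j] = acc
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (6, 6))
ker = rng.integers(0, 256, (3, 3))
exact = bitserial_conv2d(img, ker)
# Reference: direct integer convolution (what a CPU would compute)
ref = np.array([[int(np.sum(img[i:i + 3, j:j + 3].astype(np.int64) * ker))
                 for j in range(4)] for i in range(4)])
assert np.array_equal(exact, ref)  # bit-exact, CPU-equivalent result
```

Because every partial product is an exact integer, the bit-serial result matches the direct computation to the last bit, which is the sense in which a digital in-memory scheme can deliver CPU-equivalent accuracy.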
Pages: 8