Floating Gate Transistor-Based Accurate Digital In-Memory Computing for Deep Neural Networks

Times Cited: 1
Authors
Han, Runze [1 ]
Huang, Peng [1 ]
Xiang, Yachen [1 ]
Hu, Hong [2 ]
Lin, Sheng [3 ]
Dong, Peiyan [3 ]
Shen, Wensheng [1 ]
Wang, Yanzhi [3 ]
Liu, Xiaoyan [1 ]
Kang, Jinfeng [1 ]
Affiliations
[1] Peking Univ, Sch Integrated Circuits, Beijing 100871, Peoples R China
[2] GigaDevice Semicond Inc, Beijing 100094, Peoples R China
[3] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02461 USA
Keywords
deep neural networks; flash memory; floating gate transistors; in-memory computing; parallel computing; MEMRISTOR; EFFICIENT; GAME; GO;
DOI
10.1002/aisy.202200127
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
To improve the computing speed and energy efficiency of deep neural network (DNN) applications, in-memory computing with nonvolatile memory (NVM) has been proposed to address the time-consuming and energy-hungry shuttling of data between memory and processor. Herein, a digital in-memory computing method for convolution, the core operation of DNNs, is proposed. Based on this method, a floating gate transistor-based in-memory computing chip for accurate convolution computing with high parallelism is fabricated. Unlike analogue or mixed digital-analogue in-memory computing techniques, the proposed digital method achieves precision equivalent to a central processing unit (CPU) running the same neural network architecture and parameters. A hardware LeNet-5 neural network is built on the fabricated chip and achieves 96.25% accuracy on the full Modified National Institute of Standards and Technology (MNIST) database, identical to the result computed by the CPU with the same architecture and parameters.
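The abstract does not detail the circuit-level scheme, so the following is only a minimal Python sketch, under the assumption of a bit-serial AND-and-popcount decomposition, of why a fully digital in-memory dot product can be bit-exact with a CPU result; the bit widths (W_BITS, A_BITS) and helper names are illustrative and not taken from the paper.

```python
# Illustrative sketch (not the authors' circuit): bit-serial digital
# in-memory convolution. Non-negative integer weights/activations are
# split into bit planes; each 1-bit x 1-bit product is a logical AND
# (an operation a memory cell can evaluate in place), the per-plane
# popcounts are accumulated digitally with power-of-two shifts, so the
# result matches an ordinary integer dot product exactly.
import numpy as np

W_BITS = 8  # assumed weight precision
A_BITS = 8  # assumed activation precision

def to_bit_planes(x, n_bits):
    """Decompose non-negative integers into n_bits binary planes (LSB first)."""
    return [((x >> b) & 1) for b in range(n_bits)]

def digital_imc_dot(weights, activations):
    """Bit-exact dot product built only from AND, popcount, and shifts."""
    w_planes = to_bit_planes(weights, W_BITS)
    a_planes = to_bit_planes(activations, A_BITS)
    acc = 0
    for i, w_bit in enumerate(w_planes):          # weight bit position
        for j, a_bit in enumerate(a_planes):      # activation bit position
            popcount = int(np.sum(w_bit & a_bit)) # in-array AND + count
            acc += popcount << (i + j)            # digital shift-and-add
    return acc

# Sanity check against the conventional (CPU) computation.
rng = np.random.default_rng(0)
w = rng.integers(0, 2**W_BITS, size=25)   # e.g. a flattened 5x5 kernel
a = rng.integers(0, 2**A_BITS, size=25)
assert digital_imc_dot(w, a) == int(np.dot(w, a))
print("bit-exact:", digital_imc_dot(w, a))
```

Because every partial product is a binary AND and accumulation is purely digital, there is no analogue summation or quantization error, which is the property that lets such a scheme reproduce CPU accuracy (e.g., the reported 96.25% on MNIST) with unchanged network parameters.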
Pages: 8
Related Papers
50 records in total
  • [1] RRAM-Based In-Memory Computing for Embedded Deep Neural Networks
    Bankman, D.
    Messner, J.
    Gural, A.
    Murmann, B.
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 1511 - 1515
  • [2] In-Memory Computing Based Hardware Accelerator Module for Deep Neural Networks
    Appukuttan, Allen
    Thomas, Emmanuel
    Nair, Harinandan R.
    Hemanth, S.
    Dhanaraj, K. J.
    Azeez, Maleeha Abdul
    2022 IEEE 19TH INDIA COUNCIL INTERNATIONAL CONFERENCE, INDICON, 2022,
  • [3] Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks
    Joshi, Vinay
    He, Wangxin
    Seo, Jae-sun
    Rajendran, Bipin
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [4] Vesti: An In-Memory Computing Processor for Deep Neural Networks Acceleration
    Jiang, Zhewei
    Yin, Shihui
    Kim, Minkyu
    Gupta, Tushar
    Seok, Mingoo
    Seo, Jae-sun
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 1516 - 1521
  • [5] MOL-Based In-Memory Computing of Binary Neural Networks
    Ali, Khaled Alhaj
    Baghdadi, Amer
    Dupraz, Elsa
    Leonardon, Mathieu
    Rizk, Mostafa
    Diguet, Jean-Philippe
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2022, 30 (07) : 869 - 880
  • [6] Ambipolar organic thin-film transistor-based nano-floating-gate nonvolatile memory
    Han, Jinhua
    Wang, Wei
    Ying, Jun
    Xie, Wenfa
    APPLIED PHYSICS LETTERS, 2014, 104 (01)
  • [7] An MRAM-based Deep In-Memory Architecture for Deep Neural Networks
    Patil, Ameya D.
    Hua, Haocheng
    Gonugondla, Sujan
    Kang, Mingu
    Shanbhag, Naresh R.
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2019,
  • [8] Vesti: Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks
    Yin, Shihui
    Jiang, Zhewei
    Kim, Minkyu
    Gupta, Tushar
    Seok, Mingoo
    Seo, Jae-Sun
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2020, 28 (01) : 48 - 61
  • [9] A Ternary-valued, Floating Gate Transistor-based Circuit Design Approach
    Abusultan, Monther
    Khatri, Sunil P.
    2016 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI), 2016, : 719 - 724
  • [10] Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks
    Meng, Jian
    Yang, Li
    Peng, Xiaochen
    Yu, Shimeng
    Fan, Deliang
    Seo, Jae-Sun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (05) : 1576 - 1580