Spatial-Temporal Hybrid Neural Network With Computing-in-Memory Architecture

Cited by: 15
Authors
Bai, Kangjun [1 ]
Liu, Lingjia [1 ]
Yi, Yang [1 ]
Affiliations
[1] Virginia Tech, Bradley Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Funding
US National Science Foundation;
Keywords
Deep learning; computing-in-memory; spatial-temporal architecture; analog computing; delay-dynamical system; hybrid neural network; on-chip classification; ECHO STATE NETWORK; SYSTEMS; CHIP;
DOI
10.1109/TCSI.2021.3071956
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Deep learning (DL) has achieved unprecedented success in many real-world applications. However, DL is difficult to implement efficiently in hardware because it requires a complex gradient-based learning algorithm and high memory bandwidth for synaptic weight storage, especially in today's data-intensive environment. Computing-in-memory (CIM) strategies have emerged as an alternative for realizing energy-efficient neuromorphic applications in silicon, reducing the resources and energy required for neural computations. In this work, we exploit a CIM-based spatial-temporal hybrid neural network (STHNN) with a unique learning algorithm. Specifically, we integrate a multilayer perceptron with a recurrent delay-dynamical system, making the network linearly separable while processing information in both the spatial and temporal domains, and reducing memory bandwidth and hardware overhead through the CIM architecture. The prototype, fabricated in a 180nm CMOS process, is built from fully-analog components, yielding an average on-chip classification accuracy of up to 86.9% on handprinted alphabet characters with a power consumption of 33mW. Beyond that, on the handwritten digit database and the radio frequency fingerprinting dataset, software-based numerical evaluations achieve 1.6-to-9.8x and 1.9-to-4.4x speedups, respectively, without significantly degrading classification accuracy compared to cutting-edge DL approaches.
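The delay-dynamical reservoir that the abstract pairs with a multilayer perceptron is commonly modeled in software as a single nonlinear node whose past outputs, held in a delay line of "virtual" neurons, supply the temporal state, with only a linear readout trained (no backpropagation through the recurrent part). The sketch below is a minimal echo-state-style illustration of that idea; all parameter names and values (`N`, `eta`, `gamma`, the toy recall task) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper)
N = 50                   # virtual neurons per sample (delay-line length)
eta, gamma = 0.5, 0.05   # feedback and input scaling

mask = rng.uniform(-1, 1, size=N)  # fixed random input mask

def reservoir_states(u):
    """Map a 1-D input sequence u to one reservoir state vector per step."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        for i in range(N):
            # each virtual node mixes its delayed neighbor with the masked input;
            # x[-1] wraps to the previous timestep's last node, carrying memory
            x[i] = np.tanh(eta * x[i - 1] + gamma * mask[i] * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, lam=1e-3):
    """Ridge-regression readout: the only trained part of the network."""
    A = states.T @ states + lam * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Toy task: recall the previous input sample from the reservoir state
u = rng.standard_normal(200)
y = np.roll(u, 1)
S = reservoir_states(u)
W = train_readout(S[10:], y[10:])   # skip an initial washout period
pred = S[10:] @ W
```

Because only the readout is trained, learning reduces to a single linear solve, which is what makes such reservoirs attractive for analog CIM hardware where the recurrent dynamics are realized physically rather than in memory-bound digital arithmetic.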
Pages: 2850-2862 (13 pages)
Related Papers
50 records
  • [1] A hybrid precision low power computing-in-memory architecture for neural networks
    Xu, Rui
    Tao, Linfeng
    Wang, Tianqi
    Jin, Xi
    Li, Chenxia
    Li, Zhengda
    Ren, Jun
    [J]. MICROPROCESSORS AND MICROSYSTEMS, 2021, 80
  • [2] Memristor-based Deep Spiking Neural Network with a Computing-In-Memory Architecture
    Nowshin, Fabiha
    Yi, Yang
    [J]. PROCEEDINGS OF THE TWENTY THIRD INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2022), 2022, : 163 - 168
  • [3] A Hybrid RRAM-SRAM Computing-In-Memory Architecture for Deep Neural Network Inference-Training Edge Acceleration
    Feng, Jiayun
    Wang, Yu
    Hu, Xianwu
    Wen, Gan
    Wang, Zeming
    Lin, Yukai
    Wu, Danqing
    Ma, Zizhao
    Zhao, Liang
    Lu, Zhichao
    Xie, Yufeng
    [J]. 2021 SILICON NANOELECTRONICS WORKSHOP (SNW), 2021, : 65 - 66
  • [4] A dynamic neural network with local connections as spatial-temporal associative memory
    Kotov, V.B.
    [J]. Radiotekhnika i Elektronika, 2002, 47 (09): : 1083 - 1090
  • [6] Spatial-Temporal Graph Hybrid Neural Network for PV Power Forecast
    Son Tran Thanh
    Hieu Do Dinh
    Giang Nguyen Hoang Minh
    Thanh Nguyen Trong
    Tuyen Nguyen Duc
    [J]. 2024 THE 8TH INTERNATIONAL CONFERENCE ON GREEN ENERGY AND APPLICATIONS, ICGEA 2024, 2024, : 317 - 322
  • [7] A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture
    Jeong, Hoichang
    Kim, Seungbin
    Park, Keonhee
    Jung, Jueun
    Lee, Kyuho Jason
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (05) : 1739 - 1743
  • [8] Adaptive Hybrid Spatial-Temporal Graph Neural Network for Cellular Traffic Prediction
    Wang, Xing
    Yang, Kexin
    Wang, Zhendong
    Feng, Junlan
    Zhu, Lin
    Zhao, Juan
    Deng, Chao
    [J]. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4026 - 4032
  • [9] SpinCIM: spin orbit torque memory for ternary neural networks based on the computing-in-memory architecture
    Luo, Lichuan
    Liu, Dijun
    Zhang, He
    Zhang, Youguang
    Bai, Jinyu
    Kang, Wang
    [J]. CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2022, 4 (04) : 421 - 434