An In-Memory Analog Computing Co-Processor for Energy-Efficient CNN Inference on Mobile Devices

Cited by: 11
Authors
Elbtity, Mohammed [1 ]
Singh, Abhishek [2 ]
Reidy, Brendan [1 ]
Guo, Xiaochen [2 ]
Zand, Ramtin [1 ]
Affiliations
[1] Univ South Carolina, Dept Comp Sci & Engn, Columbia, SC 29208 USA
[2] Lehigh Univ, Dept Elect & Comp Engn, Bethlehem, PA 18015 USA
Keywords
in-memory computing; magnetic random access memory (MRAM); convolutional neural networks (CNNs); mixed-precision and mixed-signal inference
DOI
10.1109/ISVLSI51109.2021.00043
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, we develop an in-memory analog computing (IMAC) architecture realizing both synaptic behavior and activation functions within non-volatile memory arrays. Spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices are leveraged to realize sigmoidal neurons as well as binarized synapses. First, it is shown that the proposed IMAC architecture can be utilized to realize a multilayer perceptron (MLP) classifier, achieving orders-of-magnitude performance improvements compared to previous mixed-signal and digital implementations. Next, a heterogeneous mixed-signal and mixed-precision CPU-IMAC architecture is proposed for convolutional neural network (CNN) inference on mobile processors, in which IMAC is designed as a co-processor realizing the fully-connected (FC) layers while the convolution layers are executed on the CPU. Architecture-level analytical models are developed to evaluate the performance and energy consumption of the CPU-IMAC architecture. Simulation results exhibit 6.5% and 10% energy savings for CPU-IMAC-based realizations of the LeNet and VGG CNN models on the MNIST and CIFAR-10 pattern recognition tasks, respectively.
Pages: 188-193
Page count: 6
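
The architecture-level analytical model mentioned in the abstract can be illustrated with a first-order estimate. The Python sketch below is not the authors' model: the per-MAC energies E_CPU_MAC and E_IMAC_MAC, the transfer cost E_XFER, and the LeNet-like layer sizes are all placeholder assumptions, chosen only to show the structure of such an estimate (convolution MACs billed to the CPU; FC MACs billed to the IMAC co-processor plus a one-time activation transfer).

    # Minimal first-order energy model for a heterogeneous CPU-IMAC
    # pipeline: convolution layers run on the CPU; fully-connected (FC)
    # layers are offloaded to the IMAC co-processor. All constants are
    # illustrative placeholders, not values from the paper.
    E_CPU_MAC = 1.0    # assumed CPU energy per MAC (arbitrary units)
    E_IMAC_MAC = 0.05  # assumed IMAC energy per binarized MAC
    E_XFER = 0.2       # assumed cost to move one activation CPU -> IMAC

    def fc_macs(dims):
        """MAC count of an FC stack given layer widths, e.g. [400, 120, 84, 10]."""
        return sum(a * b for a, b in zip(dims, dims[1:]))

    def inference_energy(conv_macs, fc_dims, use_imac):
        """Total energy: conv MACs on the CPU; FC MACs on the CPU or on IMAC."""
        energy = conv_macs * E_CPU_MAC
        if use_imac:
            # FC work on IMAC, plus transferring the flattened feature map.
            energy += fc_macs(fc_dims) * E_IMAC_MAC + fc_dims[0] * E_XFER
        else:
            energy += fc_macs(fc_dims) * E_CPU_MAC
        return energy

    # LeNet-like example; the conv MAC count and FC widths are hypothetical.
    conv_macs, fc_dims = 280_000, [400, 120, 84, 10]
    baseline = inference_energy(conv_macs, fc_dims, use_imac=False)
    hetero = inference_energy(conv_macs, fc_dims, use_imac=True)
    print(f"estimated saving: {100 * (1 - hetero / baseline):.1f}%")

With these placeholder numbers the offload saves roughly 16% of total inference energy; the 6.5% (LeNet/MNIST) and 10% (VGG/CIFAR-10) figures reported in the abstract come from the authors' calibrated architecture-level models, not from this sketch. The model does capture why the savings are bounded: FC layers contribute only a fraction of a CNN's total MACs, so offloading them can save at most that fraction of the CPU's compute energy.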