Energy-Efficient Bayesian Inference Using Near-Memory Computation with Memristors

Cited: 0
Authors
Turck, C. [1 ]
Harabi, K. -E. [1 ]
Hirtzlin, T. [2 ]
Vianello, E. [2 ]
Laurent, R. [3 ]
Droulez, J. [3 ]
Bessiere, P. [4 ]
Bocquet, M. [5 ]
Portal, J. -M. [5 ]
Querlioz, D. [1 ]
Affiliations
[1] Univ Paris Saclay, CNRS, C2N, Palaiseau, France
[2] CEA, LETI, Grenoble, France
[3] Hawai Tech, Grenoble, France
[4] Sorbonne Univ, CNRS, ISIR, Paris, France
[5] Aix Marseille Univ, CNRS, IM2NP, Marseille, France
Funding
European Research Council;
Keywords
memristor; ASIC; Bayesian inference;
DOI
10.23919/DATE56975.2023.10137312
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline Code
0812;
Abstract
Bayesian reasoning is a machine learning approach that provides explainable outputs and excels in small-data situations with high uncertainty. However, it requires intensive memory access and computation and is therefore too energy-intensive for extreme-edge contexts. Near-memory computation with memristors (or RRAM) can greatly improve the energy efficiency of its computations. Here, we report two integrated circuits fabricated in a hybrid CMOS-memristor process, each featuring sixteen tiny memristor arrays and the associated near-memory logic for Bayesian inference. One circuit performs Bayesian inference using stochastic computing, and the other uses logarithmic computation; both paradigms fit the area constraints of near-memory computing well. On-chip measurements show that both approaches remain viable despite memristor imperfections. The two Bayesian machines also operate correctly at low supply voltages. We additionally designed scaled-up versions of both machines; each can perform a gesture recognition task using orders of magnitude less energy than a microcontroller unit. For this sample task, if an accuracy below 86.9% is sufficient, stochastic computing consumes less energy than logarithmic computing; for higher accuracies, logarithmic computation is more energy-efficient. These results highlight the potential of memristor-based near-memory Bayesian computing, which provides both accuracy and energy efficiency.
Pages: 2
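
The abstract contrasts two arithmetic paradigms for near-memory Bayesian inference: stochastic computing and logarithmic computation. The Python sketch below is a minimal software illustration of that contrast for naive Bayesian fusion of independent evidence; it is not the fabricated circuits' logic, and the function names, bitstream length (n_cycles), and fixed-point width (frac_bits) are illustrative assumptions.

import math
import random


def fuse_exact(likelihoods):
    """Reference result: the product of per-observation likelihoods."""
    p = 1.0
    for l in likelihoods:
        p *= l
    return p


def fuse_stochastic(likelihoods, n_cycles=4096, seed=0):
    """Stochastic computing: each probability is encoded as a Bernoulli
    bitstream, multiplication reduces to a bitwise AND, and the fused
    probability is the fraction of cycles in which every stream reads 1."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_cycles)
        if all(rng.random() < l for l in likelihoods)
    )
    return hits / n_cycles


def fuse_logarithmic(likelihoods, frac_bits=8):
    """Logarithmic computation: store quantized -log2(p) values (as a
    memristor array could) and replace each multiplication by a
    fixed-point addition."""
    scale = 1 << frac_bits
    acc = sum(round(-math.log2(l) * scale) for l in likelihoods)
    return 2.0 ** (-acc / scale)


if __name__ == "__main__":
    lik = [0.9, 0.7, 0.8, 0.6]
    print("exact      :", fuse_exact(lik))        # 0.3024
    print("stochastic :", fuse_stochastic(lik))   # noisy estimate near 0.30
    print("logarithmic:", fuse_logarithmic(lik))  # 0.3024 up to quantization error

The stochastic version needs only an AND gate per operand but returns a noisy estimate whose precision grows with the bitstream length, while the logarithmic version replaces multipliers with adders and is exact up to quantization, mirroring the energy/accuracy crossover reported in the abstract.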