Energy-Efficient Bayesian Inference Using Near-Memory Computation with Memristors

Cited by: 0
Authors
Turck, C. [1 ]
Harabi, K. -E. [1 ]
Hirtzlin, T. [2 ]
Vianello, E. [2 ]
Laurent, R. [3 ]
Droulez, J. [3 ]
Bessiere, P. [4 ]
Bocquet, M. [5 ]
Portal, J. -M. [5 ]
Querlioz, D. [1 ]
Affiliations
[1] Univ Paris Saclay, CNRS, C2N, Palaiseau, France
[2] CEA, LETI, Grenoble, France
[3] Hawai Tech, Grenoble, France
[4] Sorbonne Univ, CNRS, ISIR, Paris, France
[5] Aix Marseille Univ, CNRS, IM2NP, Marseille, France
Funding
European Research Council;
Keywords
memristor; ASIC; Bayesian inference;
DOI
10.23919/DATE56975.2023.10137312
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Bayesian reasoning is a machine learning approach that provides explainable outputs and excels in small-data situations with high uncertainty. However, it requires intensive memory access and computation and is, therefore, too energy-intensive for extreme edge contexts. Near-memory computation with memristors (or RRAM) can greatly improve the energy efficiency of its computations. Here, we report two integrated circuits fabricated in a hybrid CMOS-memristor process, each featuring sixteen tiny memristor arrays and the associated near-memory logic for Bayesian inference. One circuit performs Bayesian inference using stochastic computing, and the other uses logarithmic computation; both paradigms fit the area constraints of near-memory computing well. On-chip measurements show that both approaches remain viable despite memristor imperfections. The two Bayesian machines also operated well at low supply voltages. We also designed scaled-up versions of the machines. Both scaled-up designs can perform a gesture recognition task using orders of magnitude less energy than a microcontroller unit. We also see that if an accuracy lower than 86.9% is sufficient for this sample task, stochastic computing consumes less energy than logarithmic computing; for higher accuracies, logarithmic computation is more energy-efficient. These results highlight the potential of memristor-based near-memory Bayesian computing, providing both accuracy and energy efficiency.
Pages: 2
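As a point of reference for the two computing paradigms the abstract contrasts, the sketch below is a minimal software analogue, not the fabricated circuits or the authors' design: it fuses a set of hypothetical per-observation likelihoods, as a naive-Bayes-style inference would, once with stochastic computing (Bernoulli bitstreams combined by AND operations) and once with logarithmic computation (sums of log-likelihoods). All names, likelihood values, and bitstream lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def stochastic_product(probs, n_bits=4096):
    """Stochastic-computing estimate of a product of probabilities:
    each probability is encoded as a Bernoulli bitstream, the product is a
    bitwise AND of the streams, and the ones-density of the result is the
    estimate (noisy, but the per-term logic is a single AND gate)."""
    probs = np.asarray(probs, dtype=float)
    streams = rng.random((probs.size, n_bits)) < probs[:, None]
    return np.logical_and.reduce(streams, axis=0).mean()

def logarithmic_product(probs):
    """Logarithmic-computing view of the same product: likelihoods are stored
    as log values, multiplications become additions, and the result is exact
    up to the precision of the log representation (floating point here)."""
    return float(np.exp(np.sum(np.log(probs))))

# Hypothetical per-observation likelihoods for one class of a naive Bayes model.
likelihoods = [0.9, 0.7, 0.8, 0.6]
print(f"stochastic  ~ {stochastic_product(likelihoods):.3f}")   # noisy estimate
print(f"logarithmic = {logarithmic_product(likelihoods):.3f}")  # exact: 0.302
```

The accuracy/energy trade-off the abstract reports shows up here in miniature: the stochastic estimate uses only cheap bitwise logic but is noisy, improving with longer bitstreams, while the logarithmic path is exact at the cost of adders and a log-domain representation.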
Related Papers
50 records in total
  • [1] An Energy-Efficient Near-Memory Computing Architecture for CNN Inference at Cache Level
    Nouripayam, Masoud
    Prieto, Arturo
    Kishorelal, Vignajeth Kuttuva
    Rodrigues, Joachim
    2021 28TH IEEE INTERNATIONAL CONFERENCE ON ELECTRONICS, CIRCUITS, AND SYSTEMS (IEEE ICECS 2021), 2021,
  • [2] Energy-Efficient Bayesian Inference Using Bitstream Computing
    Khoram, Soroosh
    Daruwalla, Kyle
    Lipasti, Mikko
    IEEE COMPUTER ARCHITECTURE LETTERS, 2023, 22 (01) : 37 - 40
  • [3] Via-switch FPGA with transistor-free programmability enabling energy-efficient near-memory parallel computation
    Hashimoto, Masanori
    Bai, Xu
    Banno, Naoki
    Tada, Munehiro
    Sakamoto, Toshitsugu
    Yu, Jaehoon
    Doi, Ryutaro
    Onodera, Hidetoshi
    Imagawa, Takashi
    Ochi, Hiroyuki
    Wakabayashi, Kazutoshi
    Mitsuyama, Yukio
    Sugibayashi, Tadahiko
    JAPANESE JOURNAL OF APPLIED PHYSICS, 2022, 61 (SM)
  • [4] A DRAM-based Near-Memory Architecture for Accelerated and Energy-Efficient Execution of Transformers
    Singh, Gian
    Vrudhula, Sarma
    PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2024, GLSVLSI 2024, 2024, : 57 - 62
  • [5] Computation and memory optimized spectral domain convolutional neural network for throughput and energy-efficient inference
    Rizvi, Shahriyar Masud
    Ab Rahman, Ab Al-Hadi
    Sheikh, Usman Ullah
    Fuad, Kazi Ahmed Asif
    Shehzad, Hafiz Muhammad Faisal
    APPLIED INTELLIGENCE, 2023, 53 (04) : 4499 - 4523
  • [6] Computation and memory optimized spectral domain convolutional neural network for throughput and energy-efficient inference
    Shahriyar Masud Rizvi
    Ab Al-Hadi Ab Rahman
    Usman Ullah Sheikh
    Kazi Ahmed Asif Fuad
    Hafiz Muhammad Faisal Shehzad
    Applied Intelligence, 2023, 53 : 4499 - 4523
  • [7] HeNCoG: A Heterogeneous Near-memory Computing Architecture for Energy Efficient GCN Acceleration
    Hwang, Seung-Eon
    Song, Duyeong
    Park, Jongsun
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [8] Signatures of Bayesian inference emerge from energy-efficient synapses
    Malkin, James
    O'Donnell, Cian
    Houghton, Conor J.
    Aitchison, Laurence
    ELIFE, 2024, 12
  • [9] Trends and Opportunities for SRAM Based In-Memory and Near-Memory Computation
    Srinivasa, Srivatsa
    Ramanathan, Akshay Krishna
    Sundaram, Jainaveen
    Kurian, Dileep
    Gopal, Srinivasan
    Jain, Nilesh
    Srinivasan, Anuradha
    Iyer, Ravi
    Narayanan, Vijaykrishnan
    Karnik, Tanay
    PROCEEDINGS OF THE 2021 TWENTY SECOND INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2021), 2021, : 547 - 552
  • [10] Near-memory caching for improved energy consumption
    AbouGhazaleh, Nevine
    Childers, Bruce R.
    Mosse, Daniel
    Melhem, Rami G.
    IEEE TRANSACTIONS ON COMPUTERS, 2007, 56 (11) : 1441 - 1455