Hierarchical Approximate Memory for Deep Neural Network Applications

Cited by: 2
Authors
Ha, Minho [1 ]
Hwang, Seokha [2 ]
Kim, Jeonghun [1 ]
Lee, Youngjoo [1 ]
Lee, Sunggu [1 ]
Affiliations
[1] Pohang Univ Sci & Technol, Dept Elect Engn, Pohang 37673, South Korea
[2] Samsung Elect, Memory Business, Hwasung 18448, South Korea
Keywords
approximate computing; deep neural network; low power memory systems
DOI
10.1109/IEEECONF51394.2020.9443540
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Power consumed by a computer memory system can be significantly reduced if a certain level of error is permitted in the data stored in memory. Such an approximate memory approach is viable for applications built on deep neural networks (DNNs) because such applications are typically error-resilient. In this paper, the use of hierarchical approximate memory for DNNs is studied and modeled. Whereas previous research has focused on approximate memory for specific memory technologies, this work considers approximate memory across the entire memory hierarchy of a computer system, based on the error budget of a given target application. The paper proposes a system model built from that error budget (the maximum amount by which the memory error rate may be allowed to rise) and the power usage characteristics of the constituent memory technologies of the hierarchy. Using DNN case studies involving SRAM, DRAM, and NAND flash, the paper shows that overall memory power consumption can be reduced by up to 43.38% by using the proposed model to optimally partition the available error budget among the levels of the hierarchy.
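To make the model concrete, the following Python sketch shows one way such an error-budget partitioning could work: it brute-forces the split of a fixed application-level error budget across SRAM, DRAM, and NAND and picks the split with the lowest total relative power. This is a minimal sketch, not the authors' actual model; the budget value, residency fractions, saving slopes, and curve shape below are all illustrative assumptions rather than values from the paper.

    # Minimal sketch of the abstract's idea, NOT the authors' actual model:
    # exhaustively search for the split of an application-level error budget
    # across a three-level memory hierarchy (SRAM / DRAM / NAND) that
    # minimizes total memory power. All constants below are assumptions.

    import itertools
    import math

    BUDGET = 1e-4  # assumed tolerable end-to-end error rate for the DNN app

    # Assumed fraction of the application's data resident in each level.
    RESIDENCY = {"SRAM": 0.05, "DRAM": 0.25, "NAND": 0.70}

    # Assumed power-saving slopes: how quickly each technology's power drops
    # as its tolerated bit error rate (BER) rises, e.g. via voltage scaling
    # (SRAM), refresh-rate reduction (DRAM), or relaxed ECC (NAND).
    SLOPE = {"SRAM": 0.12, "DRAM": 0.09, "NAND": 0.05}

    def level_power(ber, slope):
        """Relative power of one level; falls as the tolerated BER grows."""
        saving = min(0.8, slope * math.log10(1.0 + ber / 1e-9))
        return 1.0 - saving

    def total_power(split):
        """Sum of per-level power for a (SRAM, DRAM, NAND) budget split."""
        power = 0.0
        for level, share in zip(("SRAM", "DRAM", "NAND"), split):
            ber = share * BUDGET / RESIDENCY[level]  # BER this level may run at
            power += level_power(ber, SLOPE[level])
        return power

    # Exhaustive search over 5%-granularity splits of the error budget.
    best_split, best_power = None, float("inf")
    for i, j in itertools.product(range(21), range(21)):
        if i + j > 20:
            continue
        split = (i / 20, j / 20, (20 - i - j) / 20)
        p = total_power(split)
        if p < best_power:
            best_split, best_power = split, p

    baseline = 3.0  # exact memory: zero tolerated error, relative power 1.0/level
    print(f"best budget split (SRAM, DRAM, NAND): {best_split}")
    print(f"memory power saving vs. exact memory: {1 - best_power / baseline:.1%}")

Under these assumed curves, the search assigns most of the budget to the level where extra tolerated error buys the largest power saving; the paper's contribution is the system model that makes this trade-off explicit across real SRAM, DRAM, and NAND characteristics.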
Pages: 261 - 266
Number of Pages: 6