Hierarchical Approximate Memory for Deep Neural Network Applications

Cited by: 2
Authors
Ha, Minho [1 ]
Hwang, Seokha [2 ]
Kim, Jeonghun [1 ]
Lee, Youngjoo [1 ]
Lee, Sunggu [1 ]
Affiliations
[1] Pohang Univ Sci & Technol, Dept Elect Engn, Pohang 37673, South Korea
[2] Samsung Elect, Memory Business, Hwasung 18448, South Korea
Keywords
approximate computing; deep neural network; low power memory systems;
DOI
10.1109/IEEECONF51394.2020.9443540
CLC Number
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Power consumed by a computer memory system can be significantly reduced if a certain level of error is permitted in the data stored in memory. Such an approximate memory approach is viable for applications built on deep neural networks (DNNs) because such applications are typically error-resilient. In this paper, the use of hierarchical approximate memory for DNNs is studied and modeled. Whereas previous research has focused on approximate memory for specific memory technologies, this work considers approximate memory across the entire memory hierarchy of a computer system by allocating the error budget of a given target application among its levels. This paper proposes a system model based on the error budget (the amount by which the memory error rate is permitted to rise) for a target application and the power usage characteristics of the constituent memory technologies of the hierarchy. Using DNN case studies involving SRAM, DRAM, and NAND, this paper shows that overall memory power consumption can be reduced by up to 43.38% by using the proposed model to optimally divide the available error budget.
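The allocation idea described in the abstract can be sketched as a small optimization: given a total error budget and a power-versus-error curve for each memory level, search for the split that minimizes total power. The curves and numbers below are hypothetical stand-ins for illustration only; the paper derives the actual characteristics for SRAM, DRAM, and NAND from device behavior.

```python
import itertools

# Hypothetical normalized power models: power as a function of the
# error rate allowed at each memory level. Steeper curves mean more
# power saved per unit of error budget (illustrative values only).
def sram_power(e):
    return 1.0 / (1.0 + 50.0 * e)   # e.g. aggressive supply-voltage scaling

def dram_power(e):
    return 1.0 / (1.0 + 20.0 * e)   # e.g. relaxed refresh rate

def nand_power(e):
    return 1.0 / (1.0 + 5.0 * e)    # e.g. weaker ECC / fewer verify steps

def best_split(total_budget, steps=100):
    """Exhaustively split total_budget across the three levels on a grid
    and return (total_power, (e_sram, e_dram, e_nand)) with lowest power."""
    best = None
    for i, j in itertools.product(range(steps + 1), repeat=2):
        a = total_budget * i / steps
        b = total_budget * j / steps
        if a + b > total_budget:
            continue
        c = total_budget - a - b  # remainder goes to the last level
        p = sram_power(a) + dram_power(b) + nand_power(c)
        if best is None or p < best[0]:
            best = (p, (a, b, c))
    return best

power, (e_sram, e_dram, e_nand) = best_split(1e-3)
print(f"allocation: SRAM={e_sram:.2e}, DRAM={e_dram:.2e}, NAND={e_nand:.2e}")
print(f"total power (normalized): {power:.4f}")
```

With these made-up convex curves the search concentrates the budget where the marginal power saving is largest; a real deployment would replace the three functions with measured characteristics and a finer search.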
Pages: 261-266
Page count: 6
Related Papers
50 records in total
  • [21] A Hierarchical Fused Fuzzy Deep Neural Network for Data Classification
    Deng, Yue
    Ren, Zhiquan
    Kong, Youyong
    Bao, Feng
    Dai, Qionghai
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2017, 25 (04) : 1006 - 1012
  • [22] Deep memory and prediction neural network for video prediction
    Liu, Zhipeng
    Chai, Xiujuan
    Chen, Xilin
    NEUROCOMPUTING, 2019, 331 : 235 - 241
  • [23] A hierarchical fused fuzzy deep neural network with heterogeneous network embedding for recommendation
    Pham, Phu
    Nguyen, Loan T. T.
    Nguyen, Ngoc Thanh
    Kozma, Robert
    Vo, Bay
    INFORMATION SCIENCES, 2023, 620 : 105 - 124
  • [24] MEMORY REDUCTION METHOD FOR DEEP NEURAL NETWORK TRAINING
    Shirahata, Koichi
    Tomita, Yasumoto
    Ike, Atsushi
    2016 IEEE 26TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2016,
  • [25] A Survey on Memory Subsystems for Deep Neural Network Accelerators
    Asad, Arghavan
    Kaur, Rupinder
    Mohammadi, Farah
    FUTURE INTERNET, 2022, 14 (05):
  • [26] Learning exact enumeration and approximate estimation in deep neural network models
    Creatore, Celestino
    Sabathiel, Silvester
    Solstad, Trygve
    COGNITION, 2021, 215
  • [27] Improved approximate dispersion relation analysis using deep neural network
    Neelan, Arun Govind
    INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS- COMPUTER SYSTEMS THEORY, 2024, 9 (03) : 155 - 182
  • [28] APPROXIMATE ANALYSIS OF A HIERARCHICAL QUEUING NETWORK
    WILLEMAIN, TR
    OPERATIONS RESEARCH, 1974, 22 (03) : 522 - 544
  • [29] An approximate memory architecture for a reduction of refresh power consumption in deep learning applications
    Duy Thanh Nguyen
    Kim, Hyun
    Lee, Hyuk-Jae
    Chang, Ik-joon
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018,
  • [30] A Deep Convolutional Spiking Neural Network for embedded applications
    Javanshir, Amirhossein
    Nguyen, Thanh Thi
    Mahmud, M. A. Parvez
    Kouzani, Abbas Z.
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2024, 13 (01) : 1 - 15