Kernel Mapping Techniques for Deep Learning Neural Network Accelerators

Cited by: 0
Authors
Ozdemir, Sarp [1]
Khasawneh, Mohammad [1,2]
Rao, Smriti [1,3]
Madden, Patrick H. [1]
Affiliations
[1] SUNY Binghamton CSD, Binghamton, NY 13901 USA
[2] MathWorks, Binghamton, NY USA
[3] Ixigo, Binghamton, NY USA
Keywords
deep learning; machine learning; combinatorial optimization; kernel mapping; placement;
DOI
10.1145/3505170.3506730
CLC classification
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Deep learning applications are compute-intensive and naturally parallel; this has spurred the development of new processor architectures tuned for the workload. In this paper, we consider structural differences between deep learning neural networks and more conventional circuits, highlighting how these differences affect strategies for mapping neural network compute kernels onto available hardware. We present an efficient mapping approach based on dynamic programming, along with a method to establish performance bounds. We also propose an architectural approach to extend the practical lifetime of hardware accelerators, enabling the integration of a variety of heterogeneous processors into a high-performance system. Experimental results using benchmarks from a recent ISPD contest are also reported.
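The abstract does not detail the authors' dynamic-programming formulation, but the general idea of DP-based kernel mapping can be illustrated on a simplified model. The sketch below is an assumption-laden illustration, not the paper's algorithm: it assigns a linear chain of network layers ("kernels") to heterogeneous processors so as to minimize total latency, where `cost[l][p]` is an assumed run time of layer `l` on processor `p` and `xfer[q][p]` is an assumed cost of moving activations between processors.

```python
# Illustrative sketch only (not the authors' method): DP over a linear
# chain of kernels, choosing one processor per kernel to minimize
# compute time plus inter-processor transfer time.

def map_kernels(cost, xfer):
    """Return (best_latency, assignment) for a chain of layers.

    cost[l][p]  -- run time of layer l on processor p (assumed given)
    xfer[q][p]  -- transfer cost when consecutive layers run on q then p
    """
    n_layers, n_procs = len(cost), len(cost[0])
    # best[p]: minimal latency of layers 0..l with layer l on processor p
    best = list(cost[0])
    choice = [[0] * n_procs]  # back-pointers for path reconstruction
    for l in range(1, n_layers):
        new_best, prev = [], []
        for p in range(n_procs):
            # best predecessor processor for layer l-1
            q = min(range(n_procs), key=lambda q: best[q] + xfer[q][p])
            new_best.append(best[q] + xfer[q][p] + cost[l][p])
            prev.append(q)
        best, _ = new_best, choice.append(prev)
    # reconstruct the assignment by walking back-pointers
    p = min(range(n_procs), key=lambda p: best[p])
    latency, assignment = best[p], [p]
    for l in range(n_layers - 1, 0, -1):
        p = choice[l][p]
        assignment.append(p)
    assignment.reverse()
    return latency, assignment
```

With two layers and two processors where each layer strongly prefers a different processor, the DP weighs the transfer penalty against the compute savings; the same recurrence extends to longer chains in O(layers x processors^2) time.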
Pages: 21-28
Page count: 8