PIE: A Pipeline Energy-efficient Accelerator for Inference Process in Deep Neural Networks

Cited by: 0
Authors
Zhao, Yangyang [1 ]
Yu, Qi [1 ]
Zhou, Xuda [1 ]
Zhou, Xuehai [1 ]
Wang, Chao [1 ]
Li, Xi [1 ]
Affiliations
[1] USTC, Dept Comp Sci & Technol, Hefei, Peoples R China
Funding
U.S. National Science Foundation
Keywords
accelerator; deep neural networks; FPGA; pipeline; inference;
DOI
10.1109/ICPADS.2016.139
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Speeding up the inference process of deep neural networks (DNNs) with hardware accelerators based on field-programmable gate arrays (FPGAs) has become a popular research topic. Because of the layer-wise structure of DNNs and the data dependency between layers, previous studies commonly exploit the inherent parallelism within a single layer to reduce computation time, but neglect the parallelism between layers. In this paper, we propose PIE, a pipelined energy-efficient accelerator that speeds up DNN inference by pipelining two adjacent layers. By computing two adjacent layers in different calculation orders, the data dependency between them can be weakened: as soon as one layer produces an output, the next layer reads that output as an input and immediately starts its own computation using a different calculation method. In this way, the computations of adjacent layers are pipelined. We evaluate PIE on a Zedboard development kit with a Xilinx Zynq-7000 FPGA, and compare it against an Intel Core i7 4.0 GHz CPU and an NVIDIA K40C GPU. Experimental results show that PIE is 4.82x faster than the CPU, and reduces energy consumption by 355.35x relative to the CPU and 12.02x relative to the GPU. Moreover, compared with a non-pipelined design in which layers are processed serially, PIE improves performance by nearly 50%.
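The inter-layer pipelining idea in the abstract can be illustrated in software: instead of waiting for one layer to finish its entire output before the next layer starts, the consumer layer processes each partial result as soon as it is produced. The sketch below is only an illustration of that dataflow using toy "layers" chained as Python generators; it is not the paper's FPGA implementation, and the layer functions are hypothetical stand-ins.

```python
# Minimal sketch (illustration only, not PIE's hardware design):
# layer2 consumes each output of layer1 as soon as it is produced,
# rather than waiting for layer1 to finish the whole feature map.

def layer1(rows):
    """Toy producer layer: transforms input rows one at a time."""
    for row in rows:
        yield [2 * x for x in row]  # emit each result immediately

def layer2(rows):
    """Toy consumer layer: starts on each row as soon as it arrives."""
    for row in rows:
        yield sum(row)

inputs = [[1, 2], [3, 4]]
# Generators chain lazily, so layer2 begins work on row 0 before
# layer1 has even touched row 1 -- the software analogue of
# pipelining two adjacent layers.
pipelined = list(layer2(layer1(inputs)))
print(pipelined)  # [6, 14]
```

In hardware, the same effect is achieved with on-chip buffering between layer engines; the lazy generator chain here merely mimics that producer-consumer overlap.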
Pages: 1067-1074
Number of pages: 8