A Speculative Computation Approach for Energy-Efficient Deep Neural Network

Cited by: 0
Authors
Zheng, Rui-Xuan [1 ,2 ]
Ko, Ya-Cheng [1 ]
Liu, Tsung-Te [1 ]
Affiliations
[1] Natl Taiwan Univ, Grad Inst Elect Engn, Taipei 10617, Taiwan
[2] Google Inc, New Taipei City 220, Taiwan
Keywords
Computation reduction; deep neural network (DNN); energy-efficient processor; speculative computation; zero skipping
DOI
10.1109/TCAD.2022.3183561
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Deep neural networks (DNNs) are now widely used for data processing and analysis. Many computational techniques have been proposed to improve the energy efficiency of executing DNNs, which is critical for emerging smart edge applications. This article presents a speculative computation approach to improving the energy efficiency of DNN computations. The proposed approach employs input channel partitioning and threshold-based negative masking to predict and eliminate unnecessary computations. Moreover, a systematic threshold-optimization procedure is proposed to achieve the best tradeoff between energy and accuracy. Finally, an energy-efficient DNN processor architecture was designed and implemented to support the proposed speculative computation approach. The experimental results show that the proposed DNN processor with speculative computation improves energy efficiency by 22.8%, with only 0.96% accuracy degradation and 1% implementation overhead.
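The abstract's core idea can be sketched in a few lines: partition a neuron's input channels, compute a partial sum over the first partition, and if that partial sum falls below a threshold, speculate that the full pre-activation is negative (so ReLU would zero it) and skip the remaining channels. This is a minimal illustrative sketch based only on the abstract's description; the function name, the `frac` partition size, and the fixed `threshold` are assumptions, not the paper's actual parameterization (the paper optimizes thresholds systematically).

```python
import numpy as np

def speculative_dot(x, w, frac=0.25, threshold=0.0):
    """Speculative ReLU pre-activation (illustrative sketch).

    Computes a partial dot product over the first `frac` of the
    input channels. If the partial sum is below `threshold`, the
    full pre-activation is predicted negative and masked to zero,
    skipping the remaining multiply-accumulates. Returns the
    (possibly speculative) ReLU output and the number of MACs spent.
    """
    k = max(1, int(len(x) * frac))          # channels in the first partition
    partial = float(np.dot(x[:k], w[:k]))   # speculative partial sum
    if partial < threshold:
        return 0.0, k                       # predicted negative: skip the rest
    full = partial + float(np.dot(x[k:], w[k:]))
    return max(full, 0.0), len(x)           # exact ReLU on the full sum
```

A looser (more negative) threshold skips fewer computations but mispredicts less often; the energy/accuracy tradeoff comes from tuning that threshold per layer.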
Pages: 795-806 (12 pages)