Dynamic Precision Multiplier For Deep Neural Network Accelerators

Cited by: 1
Authors
Ding, Chen [1 ]
Huan, Yuxiang [1 ]
Zheng, Lirong [1 ]
Zou, Zhuo [1 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Keywords
dynamic precision multiplier; Booth algorithm; mixed partial product selection structure;
DOI
10.1109/SOCC49529.2020.9524752
CLC Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
The application of dynamic precision multipliers in deep neural network accelerators can greatly improve a system's data-processing capacity under the same memory bandwidth limitation. This paper presents a Dynamic Precision Multiplier (DPM) for deep learning accelerators that adapts to lightweight deep learning models of varied precision. The proposed DPM adopts the Booth algorithm and a Wallace adder tree to support parallel computation of one 16-bit, two 8-bit, or four 4-bit signed/unsigned multiplications at run time. The DPM is further optimized with simplified partial product selection logic and a mixed partial product selection structure, reducing power cost for energy-efficient edge computing. The DPM is evaluated in both FPGA and ASIC flows, and the results show that the 4-bit mode consumes the least energy of the three modes, at 1.34 pJ/word. Compared with previous similar designs, it also saves nearly 22.38% and 232.17% of the power consumption in 16-bit and 8-bit mode, respectively.
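The radix-4 Booth recoding that the DPM builds on can be illustrated with a short behavioral sketch. This is not the paper's RTL: the function name `booth_radix4`, the `n` width parameter, and the use of a plain Python sum in place of the Wallace adder tree are all illustrative assumptions. The sketch only shows how overlapping 3-bit groups of the multiplier are recoded into digits in {-2, -1, 0, 1, 2}, halving the number of partial products versus bit-serial shift-and-add.

```python
def booth_radix4(a, b, n=16):
    """Multiply a by an n-bit signed b using radix-4 Booth recoding.

    Behavioral model only: partial products are accumulated with a
    plain sum, standing in for the hardware's Wallace adder tree.
    """
    mask = (1 << n) - 1
    b_u = b & mask          # two's-complement bit pattern of b
    bits = b_u << 1         # append a 0 below the LSB for recoding
    # Each 3-bit group (b[i+1], b[i], b[i-1]) maps to one signed digit.
    digits = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
              0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    result = 0
    for i in range(0, n, 2):            # one digit per bit pair
        digit = digits[(bits >> i) & 0b111]
        result += (digit * a) << i      # partial product, shifted by 2 bits/step
    return result
```

For a 16-bit multiplier this produces 8 partial products instead of 16; for example, `booth_radix4(-3, 7)` recodes 7 as the digits (2, -1), i.e. 2*4 - 1, and returns -21.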
Pages: 180 - 184 (5 pages)
Related Papers (50 total)
  • [31] Optimizing deep learning inference on mobile devices with neural network accelerators
Zeng Xi
    Xu Yunlong
    Zhi Tian
    High Technology Letters, 2019, 25 (04) : 417 - 425
  • [32] Quantization-Error-Robust Deep Neural Network for Embedded Accelerators
    Jung, Youngbeom
    Kim, Hyeonuk
    Choi, Yeongjae
    Kim, Lee-Sup
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (02) : 609 - 613
  • [33] Enhancing the Utilization of Processing Elements in Spatial Deep Neural Network Accelerators
    Asadikouhanjani, Mohammadreza
    Ko, Seok-Bum
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2021, 40 (09) : 1947 - 1951
  • [34] Compute-in-Time for Deep Neural Network Accelerators: Challenges and Prospects
    Al Maharmeh, Hamza
    Sarhan, Nabil J.
    Hung, Chung-Chih
    Ismail, Mohammed
    Alhawari, Mohammad
    2020 IEEE 63RD INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2020, : 990 - 993
  • [35] USING DATAFLOW TO OPTIMIZE ENERGY EFFICIENCY OF DEEP NEURAL NETWORK ACCELERATORS
    Chen, Yu-Hsin
    Emer, Joel
    Sze, Vivienne
    IEEE MICRO, 2017, 37 (03) : 12 - 21
  • [36] A survey of neural network accelerators
    Li, Zhen
    Wang, Yuqing
    Zhi, Tian
    Chen, Tianshi
    FRONTIERS OF COMPUTER SCIENCE, 2017, 11 (05) : 746 - 761
  • [38] CONSTRUCTION OF MULTIPLIER BY NEURAL NETWORK
    TODA, N
    USUI, S
    IMAGES OF THE TWENTY-FIRST CENTURY, PTS 1-6, 1989, 11 : 2056 - 2057
  • [39] Fast Inner-Product Algorithms and Architectures for Deep Neural Network Accelerators
    Pogue, Trevor E.
    Nicolici, Nicola
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (02) : 495 - 509
  • [40] CANN: Curable Approximations for High-Performance Deep Neural Network Accelerators
    Hanif, Muhammad Abdullah
    Khalid, Faiq
    Shafique, Muhammad
    PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,