Dynamic Precision Multiplier For Deep Neural Network Accelerators

Cited by: 1
Authors
Ding, Chen [1 ]
Yuxiang, Huan [1 ]
Zheng, Lirong [1 ]
Zou, Zhuo [1 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Keywords
dynamic precision multiplier; Booth algorithm; mixed partial product selection structure
DOI
10.1109/SOCC49529.2020.9524752
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
The application of dynamic precision multipliers in deep neural network accelerators can greatly improve a system's data processing capacity under the same memory bandwidth limitation. This paper presents a Dynamic Precision Multiplier (DPM) for deep learning accelerators that adapts to lightweight deep learning models of varied precision. The proposed DPM adopts the Booth algorithm and a Wallace adder tree to support the parallel computation of one 16-bit, two 8-bit, or four 4-bit signed/unsigned multiplications at run time. The DPM is further optimized with simplified partial product selection logic and a mixed partial product selection structure, reducing power cost for energy-efficient edge computing. The DPM is evaluated in both FPGA and ASIC flows, and the results show that the 4-bit mode consumes the least energy of the three modes, at 1.34 pJ/word. Compared with previous similar designs, it also saves nearly 22.38% and 232.17% of power consumption in 16-bit and 8-bit modes, respectively.
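The radix-4 Booth recoding named in the abstract can be sketched in software. This is an illustrative behavioral model only, not the paper's hardware design; the function name, bit widths, and recoding table layout are assumptions for the sketch. Each overlapping 3-bit group of the multiplier selects a partial product from {0, ±X, ±2X}, halving the number of partial products a Wallace tree must then sum:

```python
def booth_radix4_multiply(x: int, y: int, bits: int = 16) -> int:
    """Behavioral model of radix-4 Booth multiplication of two
    signed `bits`-wide integers (bits must be even)."""
    mask = (1 << bits) - 1
    # Two's-complement bit pattern of y, with an implicit 0 appended
    # below the LSB so the first triplet can be formed.
    y_ext = (y & mask) << 1
    # Radix-4 Booth recoding table: 3-bit group -> digit in {-2..2}
    recode = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
              0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    product = 0
    for i in range(bits // 2):
        digit = recode[(y_ext >> (2 * i)) & 0b111]
        # Each partial product is a small multiple of x, shifted by 2i.
        product += (digit * x) << (2 * i)
    return product
```

In the DPM itself these partial products would be generated and selected in hardware and reduced by the Wallace adder tree; the model only shows why one recoded digit per two multiplier bits keeps the partial-product count low across the 16/8/4-bit modes.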
Pages: 180-184 (5 pages)
Related Papers
(50 total)
  • [1] A New Constant Coefficient Multiplier for Deep Neural Network Accelerators
    Manoj, B. R.
    Yaji, Jayashree S.
    Raghuram, S.
    2022 IEEE 3RD INTERNATIONAL CONFERENCE ON VLSI SYSTEMS, ARCHITECTURE, TECHNOLOGY AND APPLICATIONS, VLSI SATA, 2022,
  • [2] Review of ASIC accelerators for deep neural network
    Machupalli, Raju
    Hossain, Masum
    Mandal, Mrinal
    MICROPROCESSORS AND MICROSYSTEMS, 2022, 89
  • [3] Approximate Adders for Deep Neural Network Accelerators
    Raghuram, S.
    Shashank, N.
    2022 35TH INTERNATIONAL CONFERENCE ON VLSI DESIGN (VLSID 2022) HELD CONCURRENTLY WITH 2022 21ST INTERNATIONAL CONFERENCE ON EMBEDDED SYSTEMS (ES 2022), 2022, : 210 - 215
  • [4] Research on High-Precision Stochastic Computing VLSI Structures for Deep Neural Network Accelerators
    Wu, Jingguo
    Zhu, Jingwei
    Xiong, Xiankui
    Yao, Haidong
    Wang, Chengchen
    Chen, Yun
    ZTE Communications, 2024, 22 (04) : 9 - 17
  • [5] Low-precision logarithmic arithmetic for neural network accelerators
    Christ, Maxime
    de Dinechin, Florent
    Petrot, Frederic
    2022 IEEE 33RD INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS (ASAP), 2022, : 72 - 79
  • [6] A Survey on Memory Subsystems for Deep Neural Network Accelerators
    Asad, Arghavan
    Kaur, Rupinder
    Mohammadi, Farah
    FUTURE INTERNET, 2022, 14 (05):
  • [7] Speeding up Convolutional Neural Network Training with Dynamic Precision Scaling and Flexible Multiplier-Accumulator
    Na, Taesik
    Mukhopadhyay, Saibal
    ISLPED '16: PROCEEDINGS OF THE 2016 INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, 2016, : 58 - 63
  • [8] Exploiting Variable Precision Computation Array for Scalable Neural Network Accelerators
    Yang, Shaofei
    Liu, Longjun
    Li, Baoting
    Sun, Hongbin
    Zheng, Nanning
    2020 2ND IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2020), 2020, : 315 - 319
  • [9] ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks
    Jonnalagadda, Aditya Anirudh
    Kumar, Uppugunduru Anil
    Thotli, Rishi
    Sardesai, Satvik
    Veeramachaneni, Sreehari
    Ahmed, Syed Ershad
    IEEE ACCESS, 2024, 12 : 31036 - 31046
  • [10] RNS Hardware Matrix Multiplier for High Precision Neural Network Acceleration
    Olsen, Eric B.
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018,