Variable-Precision Approximate Floating-Point Multiplier for Efficient Deep Learning Computation

Cited by: 6
Authors
Zhang, Hao [1 ]
Ko, Seok-Bum [2 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Peoples R China
[2] Univ Saskatchewan, Dept Elect & Comp Engn, Saskatoon, SK S7N 5A9, Canada
Keywords
Deep learning; Encoding; Computer architecture; Computational efficiency; Circuits and systems; Adders; Hardware design languages; Approximate multiplier; posit format; deep learning computation; variable precision;
DOI
10.1109/TCSII.2022.3161005
Chinese Library Classification (CLC)
TM (Electrical Engineering); TN (Electronics & Communication Technology);
Discipline Codes
0808; 0809;
Abstract
In this brief, a variable-precision approximate floating-point multiplier is proposed for energy-efficient deep learning computation. The proposed architecture supports approximate multiplication in the BFloat16 format. Because the input and output activations of deep learning models usually follow a normal distribution, and inspired by the posit format, numbers with different values can be represented with different precisions. In the proposed architecture, posit encoding is used to set the level of approximation, and the precision of the computation is controlled by the value of the product exponent: for large exponents, lower-precision multiplication is applied to the mantissas, and for small exponents, higher-precision computation is applied. Truncation is used as the approximation method, with the number of truncated bit positions controlled by the value of the product exponent. The proposed design achieves a 19% area reduction and a 42% power reduction compared with a standard BFloat16 multiplier. When the proposed multiplier is applied to deep learning computation, it achieves almost the same accuracy as a standard BFloat16 multiplier.
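The abstract's core idea, exponent-dependent mantissa truncation, can be illustrated with a minimal software sketch. This is not the paper's circuit: the truncation schedule (`trunc` thresholds) below is a hypothetical example, subnormals are ignored, and the BFloat16 decomposition simply keeps the top 16 bits of an IEEE-754 float32.

```python
import struct

def to_bfloat16_fields(x):
    """Decompose a float into BFloat16 sign, 8-bit exponent, 7-bit mantissa."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0] >> 16  # keep top 16 bits
    sign = bits >> 15
    exp = (bits >> 7) & 0xFF
    mant = bits & 0x7F
    return sign, exp, mant

def approx_bfloat16_mul(a, b):
    """Sketch: truncate more mantissa bits when the product exponent is large."""
    sa, ea, ma = to_bfloat16_fields(a)
    sb, eb, mb = to_bfloat16_fields(b)
    if ea == 0 or eb == 0:
        return 0.0  # subnormals/zero not modeled in this sketch
    exp_sum = ea + eb - 127  # biased exponent of the product (bias 127)
    # Hypothetical schedule: larger products tolerate coarser mantissas.
    trunc = 4 if exp_sum > 127 else 2 if exp_sum > 120 else 0
    ma_t = (ma >> trunc) << trunc  # zero out the low `trunc` mantissa bits
    mb_t = (mb >> trunc) << trunc
    sig = (1.0 + ma_t / 128.0) * (1.0 + mb_t / 128.0)  # significand product
    sign = -1.0 if sa ^ sb else 1.0
    return sign * sig * 2.0 ** (exp_sum - 127)
```

Because truncation only discards low-order mantissa bits, the result stays close to the exact BFloat16 product while the (hardware) mantissa multiplier can shrink, which is the mechanism behind the reported area and power savings.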
Pages: 2503 / 2507
Number of pages: 5