A Design Framework for Hardware-Efficient Logarithmic Floating-Point Multipliers

Cited by: 0
Authors
Zhang T. [1 ]
Niu Z. [1 ]
Han J. [1 ]
Affiliations
[1] Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB
Keywords
approximate computing; approximate multiplier; artificial neural networks; cost; error tolerance; floating-point multiplier; hardware; image coding; JPEG compression; logarithmic multiplier; neural networks; standards; training; transform coding
DOI: 10.1109/TETC.2024.3365650
Abstract
The symbiotic use of logarithmic approximation in floating-point (FP) multiplication can significantly reduce the hardware complexity of a multiplier. However, because each logarithmic FP multiplier (LFPM) has its own unique error characteristics, the limited number of existing designs makes it difficult to find one that fits a specific error-tolerant application, such as neural networks (NNs) or digital signal processing. This paper proposes a design framework for generating LFPMs. Two FP representation formats with different mantissa ranges are considered: the IEEE 754 standard FP format and the nearest-power-of-two FP format. For both the logarithm and anti-logarithm computations, the applicable input region is first evenly divided into several intervals, and approximation methods with negative or positive errors are then developed for each sub-region. Combining different approximation methods across the applicable regions as piece-wise functions yields LFPMs with various trade-offs between accuracy and hardware cost. The variety of error characteristics of the resulting LFPMs is discussed, and a generic hardware implementation is illustrated. As case studies, two LFPM designs are presented and evaluated in JPEG compression and NN applications. They not only increase the classification accuracy but also achieve smaller power-delay products (PDPs) than the exact FP multiplier, while being more accurate than a recent logarithmic FP design.
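The framework builds on the classic principle behind logarithmic multipliers: a product becomes an addition in the log domain, with cheap piece-wise approximations for the logarithm and anti-logarithm. As a point of reference only (this is Mitchell's well-known linear approximation, not the paper's proposed piece-wise configurations), a minimal sketch in Python:

```python
import math

def mitchell_multiply(a: float, b: float) -> float:
    """Approximate a*b using Mitchell's logarithmic approximation.

    For x = 2^k * (1 + m) with 0 <= m < 1, log2(x) is approximated
    by k + m, so multiplication reduces to one addition; the antilog
    2^s is approximated by 2^floor(s) * (1 + frac(s)).
    Only positive inputs are handled in this sketch.
    """
    assert a > 0 and b > 0

    def approx_log2(x: float) -> float:
        # math.log2/floor model a hardware leading-one detector here.
        k = math.floor(math.log2(x))   # exponent of the leading one
        m = x / 2.0 ** k - 1.0         # mantissa fraction in [0, 1)
        return k + m                   # Mitchell: log2(x) ~= k + m

    s = approx_log2(a) + approx_log2(b)  # add instead of multiply
    k = math.floor(s)
    f = s - k
    return 2.0 ** k * (1.0 + f)          # antilog: 2^s ~= 2^k * (1 + f)
```

For example, `mitchell_multiply(3.0, 5.0)` returns 14.0 instead of 15.0; this linear approximation always errs on the negative side, which is exactly the kind of one-sided error behavior the paper's framework varies per interval to trade accuracy against hardware cost.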
Pages: 1-11
Page count: 10