Bit-Beading: Stringing bit-level MAC results for Accelerating Neural Networks

Cited by: 0
Authors
Anwar, Zeeshan [1 ]
Longchar, Imlijungla [1 ]
Kapoor, Hemangee K. [1 ]
Affiliations
[1] IIT Guwahati, Department of Computer Science and Engineering, Guwahati, India
Keywords
MAC Unit; Reconfigurable Arithmetic; Booth's algorithm; CNN; DNN; Neural Network; Low Precision
DOI
10.1109/VLSID60093.2024.00042
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
The rising demands of AI applications, and the consequent need for improvement, drive researchers to design better and faster algorithms and architectures. Convolutional Neural Networks (CNNs) have become ubiquitous and find wide application in computer vision. CNN inference involves convolution operations, which consist mainly of a massive number of matrix multiplications, so optimising these multiplications enables faster execution of inference tasks. A fixed-precision datapath takes the same time to compute regardless of whether the operands require high or low precision, yet it is noted in the literature that lowering the precision to some extent does not affect inference accuracy. In this paper, we propose a reconfigurable multiplier that can handle operands of different precisions. We design the Bit-Bead, a basic unit based on Booth's algorithm, and compose (i.e., string) several bit-beads to form a multiplier of the required precision. The reconfigurable multiplier achieves lower latency at reduced precision and also enables multiple low-precision computations to be performed. Our proposal shows considerable performance improvement over the baseline and existing designs.
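As a rough illustration of the idea sketched in the abstract, the following minimal Python sketch models a Booth-based "bead" and the "stringing" of beads into a wider multiplier. The radix-4 recoding, the 4-bit slice width, and the names booth_radix4_digits, bead_multiply and strung_multiply are illustrative assumptions, not details taken from the paper.

# A minimal Python sketch of the bit-bead idea described in the abstract.
# Assumptions (illustrative, not from the paper): each "bead" is a radix-4
# Booth recoder covering a 4-bit slice of the multiplier operand, and wider
# products are formed by shifting and adding the bead results.

def booth_radix4_digits(b, width):
    # Radix-4 Booth recoding of an unsigned `width`-bit value `b`
    # into digits in {-2, -1, 0, +1, +2}, least significant first.
    b &= (1 << width) - 1
    b <<= 1                                   # implicit 0 below the LSB
    table = {0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}
    return [table[(b >> i) & 0b111] for i in range(0, width + 1, 2)]

def bead_multiply(a, b_slice, slice_width=4):
    # One "bead": multiply `a` by a low-precision slice of the multiplier
    # by accumulating its Booth partial products (digit i has weight 4**i).
    return sum(d * (a << (2 * i))
               for i, d in enumerate(booth_radix4_digits(b_slice, slice_width)))

def strung_multiply(a, b, b_width=8, slice_width=4):
    # "String" several beads to reach the required precision: split `b`
    # into slices, run one bead per slice, then align and add the results.
    total = 0
    for k in range(0, b_width, slice_width):
        b_slice = (b >> k) & ((1 << slice_width) - 1)
        total += bead_multiply(a, b_slice, slice_width) << k
    return total

# Quick check: an 8-bit x 8-bit product built from two 4-bit beads.
assert strung_multiply(173, 201) == 173 * 201

In hardware, each bead would generate its Booth partial products locally, and the stringing step corresponds to shifting and accumulating the bead outputs; the sketch only mirrors that arithmetic in software under the stated assumptions.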
Pages: 216-221
Page count: 6