The Effects of Approximate Multiplication on Convolutional Neural Networks

Cited by: 33
Authors
Kim, Min Soo [1 ]
Del Barrio, Alberto A. [2 ]
Kim, Hyunjin [3 ]
Bagherzadeh, Nader [4 ]
Affiliations
[1] NGD Syst, Irvine, CA 92618 USA
[2] Univ Complutense Madrid, Dept Comp Architecture & Automat, Madrid 28040, Spain
[3] Dankook Univ, Sch Elect & Elect Engn, Yongin 16890, Gyeonggi Do, South Korea
[4] Univ Calif Irvine, Dept Elect Engn & Comp Sci, Irvine, CA 92697 USA
Keywords
Machine learning; computer vision; object recognition; arithmetic and logic units; low-power design
DOI
10.1109/TETC.2021.3050989
CLC Number
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
This article analyzes the effects of approximate multiplication when performing inferences on deep convolutional neural networks (CNNs). Approximate multiplication can reduce the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators. The study identifies the critical factors in the convolution, fully-connected, and batch normalization layers that allow more accurate CNN predictions despite the errors introduced by approximate multiplication. The same factors also provide an arithmetic explanation of why bfloat16 multiplication performs well on CNNs. Experiments are performed with recognized network architectures to show that approximate multipliers can produce predictions that are nearly as accurate as the FP32 references, without additional training. For example, the ResNet and Inception-v4 models with Mitch-w6 multiplication produce Top-5 errors that are within 0.2 percent of the FP32 references. A brief cost comparison of Mitch-w6 against bfloat16 is presented, where a MAC operation saves up to 80 percent of energy compared to bfloat16 arithmetic. The most far-reaching contribution of this article is the analytical justification that multiplications can be approximated while additions need to be exact in CNN MAC operations.
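The Mitch-w multipliers discussed in the abstract build on Mitchell's logarithmic multiplication, in which a product is approximated by adding the operands' approximate base-2 logarithms and taking an approximate antilogarithm. The following Python model is an illustrative sketch of Mitchell's classic algorithm only (not the authors' Mitch-w hardware design); the function names are hypothetical. Note that the multiplication itself is approximated, while the addition in the log domain is exact, mirroring the article's conclusion about CNN MAC operations.

```python
import math

def mitchell_log2(x: float) -> float:
    # Mitchell's approximation: for x = 2^k * (1 + m) with m in [0, 1),
    # approximate log2(x) by k + m (i.e., treat the mantissa fraction
    # as the fractional part of the logarithm).
    k = math.floor(math.log2(x))
    m = x / (2 ** k) - 1.0
    return k + m

def mitchell_antilog2(y: float) -> float:
    # Inverse approximation: 2^y ≈ 2^floor(y) * (1 + frac(y)).
    k = math.floor(y)
    f = y - k
    return (2 ** k) * (1.0 + f)

def mitchell_multiply(a: float, b: float) -> float:
    # Approximate a * b for positive a, b: add the approximate logs
    # (an exact addition), then take the approximate antilog.
    # The result never exceeds the exact product; the worst-case
    # relative error of Mitchell's method is about 11.1 percent.
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))
```

For example, `mitchell_multiply(3, 5)` yields 14.0 against the exact product 15, an underestimate of about 6.7 percent, while products of powers of two such as `mitchell_multiply(2, 4)` are exact.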
Pages: 904 - 916
Page count: 13
Related Papers
50 records in total
  • [1] Low-power Implementation of Mitchell's Approximate Logarithmic Multiplication for Convolutional Neural Networks
    Kim, Min Soo
    Del Barrio, Alberto A.
    Hermida, Roman
    Bagherzadeh, Nader
    [J]. 2018 23RD ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2018, : 617 - 622
  • [2] A Cost-Efficient Approximate Dynamic Ranged Multiplication and Approximation-Aware Training on Convolutional Neural Networks
    Kim, Hyunjin
    Del Barrio, Alberto A.
    [J]. IEEE ACCESS, 2021, 9 : 135513 - 135525
  • [3] Application of Approximate Matrix Multiplication to Neural Networks and Distributed SLAM
    Plancher, Brian
    Brumar, Camelia D.
    Brumar, Iulian
    Pentecost, Lillian
    Rama, Saketh
    Brooks, David
    [J]. 2019 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2019,
  • [4] Low-Complexity Approximate Convolutional Neural Networks
    Cintra, Renato J.
    Duffner, Stefan
    Garcia, Christophe
    Leite, Andre
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (12) : 5981 - 5992
  • [5] Convolutional Neural Networks as Summary Statistics for Approximate Bayesian Computation
    Akesson, Mattias
    Singh, Prashant
    Wrede, Fredrik
    Hellander, Andreas
    [J]. IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2022, 19 (06) : 3353 - 3365
  • [6] Stochastic Diagonal Approximate Greatest Descent in Convolutional Neural Networks
    Tan, Hong Hui
    Lim, King Hann
    Harno, Hendra G.
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS (ICSIPA), 2017, : 451 - 454
  • [7] Acceleration Techniques for Automated Design of Approximate Convolutional Neural Networks
    Pinos, Michal
    Mrazek, Vojtech
    Vaverka, Filip
    Vasicek, Zdenek
    Sekanina, Lukas
    [J]. IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (01) : 212 - 224
  • [8] Exploiting Approximate Computing for Efficient and Reliable Convolutional Neural Networks
    Bosio, Alberto
    Deveautour, Bastien
    O'Connor, Ian
    [J]. 2022 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2022), 2022, : 326 - 326
  • [9] Output Layer Multiplication for Class Imbalance Problem in Convolutional Neural Networks
    Yang, Zhao
    Zhu, Yuanxin
    Liu, Tie
    Zhao, Sai
    Wang, Yunyan
    Tao, Dapeng
    [J]. NEURAL PROCESSING LETTERS, 2020, 52 (03) : 2637 - 2653