Design of Power-Efficient Approximate Multipliers for Approximate Artificial Neural Networks

Cited by: 82
Authors
Mrazek, Vojtech [1 ]
Sarwar, Syed Shakib [2 ]
Sekanina, Lukas [1 ]
Vasicek, Zdenek [1 ]
Roy, Kaushik [2 ]
Affiliations
[1] Brno Univ Technol, Fac Informat Technol, Ctr Excellence IT4Innovat, CS-61090 Brno, Czech Republic
[2] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
DOI
10.1145/2966986.2967021
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Artificial neural networks (NNs) have shown significant promise in difficult tasks like image classification and speech recognition. Even well-optimized hardware implementations of digital NNs exhibit significant power consumption, mainly due to non-uniform pipeline structures and the inherent redundancy of the numerous arithmetic operations that must be performed to produce each output vector. This paper provides a methodology for the design of well-optimized, power-efficient NNs with a uniform structure suitable for hardware implementation. An error-resilience analysis was performed to determine key constraints for the design of the approximate multipliers employed in the resulting NN structure. By means of a search-based approximation method, approximate multipliers showing the desired trade-offs between accuracy and implementation cost were created. The resulting approximate NNs, containing the approximate multipliers, were evaluated on a standard benchmark (the MNIST dataset) and a real-world classification problem, Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, a 91% power reduction in multiplication led to a classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the backpropagation learning algorithm to adapt to NNs containing the approximate multipliers.
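The paper's approximate multipliers were found by a search-based method and are not reproduced here; as a rough intuition for how such circuits trade accuracy for power, the sketch below shows a simple truncation-based approximate multiplier (an illustrative assumption, not one of the paper's designs) and measures its mean relative error against exact multiplication:

```python
import random

def approx_mul(a, b, k=4):
    """Truncate the k least-significant bits of each operand before
    multiplying, then shift the product back. Dropping low-order bits
    shrinks the partial-product array, which is the basic mechanism
    behind many low-power approximate multipliers."""
    return ((a >> k) * (b >> k)) << (2 * k)

# Estimate the accuracy cost on random 8-bit operands.
random.seed(0)
errs = []
for _ in range(1000):
    a, b = random.randint(1, 255), random.randint(1, 255)
    errs.append(abs(a * b - approx_mul(a, b)) / (a * b))
mean_rel_err = sum(errs) / len(errs)
print(f"mean relative error: {mean_rel_err:.3f}")
```

Tuning `k` moves the design along the accuracy/cost trade-off curve: `k=0` recovers the exact multiplier, while larger `k` removes more partial products at the price of larger error, analogous to the trade-offs explored by the search in the paper.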
Pages: 7
Related Papers
50 records total
  • [31] Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers
    Ansari, Mohammad Saeed
    Mrazek, Vojtech
    Cockburn, Bruce F.
    Sekanina, Lukas
    Vasicek, Zdenek
    Han, Jie
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2020, 28 (02) : 317 - 328
  • [32] AxSA: On the Design of High-Performance and Power-Efficient Approximate Systolic Arrays for Matrix Multiplication
    Waris, Haroon
    Wang, Chenghua
    Liu, Weiqiang
    Lombardi, Fabrizio
    JOURNAL OF SIGNAL PROCESSING SYSTEMS, 2021, 93 : 605 - 615
  • [33] Exploring Approximate Adders for Power-Efficient Harmonics Elimination Hardware Architectures
    Pereira, Pedro T. L.
    Paim, Guilherme
    Ferreira, Guilherme
    Costa, Eduardo
    Almeida, Sergio
    Bampi, Sergio
    2021 IEEE 12TH LATIN AMERICA SYMPOSIUM ON CIRCUITS AND SYSTEMS (LASCAS), 2021
  • [34] How can artificial neural networks approximate the brain?
    Shao, Feng
    Shen, Zheng
    FRONTIERS IN PSYCHOLOGY, 2023, 13
  • [35] Power-Efficient Accelerator Design for Neural Networks Using Computation Reuse
    Yasoubi, Ali
    Hojabr, Reza
    Modarressi, Mehdi
    IEEE COMPUTER ARCHITECTURE LETTERS, 2017, 16 (01) : 72 - 75
  • [36] Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training
    Cheng, TaiYu
    Masuda, Yutaka
    Chen, Jun
    Yu, Jaehoon
    Hashimoto, Masanori
    INTEGRATION-THE VLSI JOURNAL, 2020, 74 : 19 - 31
  • [37] Modeling the effects of power efficient approximate multipliers in radio astronomy correlators
    Kokkeler, A. B. J.
    Gillani, G. A.
    Boonstra, A. J.
    EXPERIMENTAL ASTRONOMY, 2024, 57 (02)
  • [39] Design of Approximate Redundant Binary Multipliers
    Cao, Tian
    Liu, Weiqiang
    Wang, Chenghua
    Cui, Xiaoping
    Lombardi, Fabrizio
    PROCEEDINGS OF THE 2016 IEEE/ACM INTERNATIONAL SYMPOSIUM ON NANOSCALE ARCHITECTURES (NANOARCH), 2016, : 31 - 36
  • [40] A Power-efficient Accelerator for Convolutional Neural Networks
    Sun, Fan
    Wang, Chao
    Gong, Lei
    Xu, Chongchong
    Zhang, Yiwei
    Lu, Yuntao
    Li, Xi
    Zhou, Xuehai
    2017 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER), 2017, : 631 - 632