Applicability of approximate multipliers in hardware neural networks

Cited by: 44
Authors
Lotric, Uros [1 ]
Bulic, Patricio [1 ]
Affiliations
[1] Univ Ljubljana, Fac Comp & Informat Sci, Ljubljana, Slovenia
Keywords
Hardware neural network; Iterative logarithmic multiplier; FPGA; Digital design; Computer arithmetic; IMPLEMENTATION; PROGRESS;
DOI
10.1016/j.neucom.2011.09.039
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In recent years there has been growing interest in hardware neural networks, which offer many benefits over conventional software models, mainly in applications where speed, cost, reliability, or energy efficiency are of great importance. These hardware neural networks require many resource-, power- and time-consuming multiplication operations, so special care must be taken during their design. Since neural network processing can be performed in parallel, there is usually a requirement for designs with as many concurrent multiplication circuits as possible. One way to achieve this goal is to replace the complex exact multiplying circuits with simpler, approximate ones. The present work demonstrates the application of approximate multiplying circuits in the design of a feed-forward neural network model with on-chip learning ability. Experiments performed on the heterogeneous PROBEN1 benchmark dataset show that the adaptive nature of the neural network model successfully compensates for the calculation errors of the approximate multiplying circuits. At the same time, the proposed designs also profit from more computing power and increased energy efficiency. (C) 2012 Elsevier B.V. All rights reserved.
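The keywords mention an iterative logarithmic multiplier, the class of approximate circuit the paper builds on. As a rough illustration only (the function name and recursion structure below are my own, not taken from the paper), the idea can be sketched in software: write each operand as a leading power of two plus a residue, a = 2^k1 + a_r and b = 2^k2 + b_r, so a*b = 2^(k1+k2) + a_r*2^k2 + b_r*2^k1 + a_r*b_r; the basic approximation drops the a_r*b_r term, and each extra iteration approximates that residual product the same way, shrinking the error.

```python
def ilm(a: int, b: int, iterations: int = 1) -> int:
    """Approximate product of two non-negative ints, iterative-logarithmic style.

    With iterations=0 only the three cheap terms are kept (one error term
    dropped); each further iteration applies the same decomposition to the
    dropped residue product, so enough iterations recover the exact result.
    """
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1  # leading-one positions
    ar, br = a - (1 << k1), b - (1 << k2)            # residues after the leading one
    # Three shift-and-add terms: 2^(k1+k2) + ar*2^k2 + br*2^k1
    approx = (1 << (k1 + k2)) + (ar << k2) + (br << k1)
    # Error-correction iteration: approximate the dropped ar*br the same way.
    if iterations > 0:
        approx += ilm(ar, br, iterations - 1)
    return approx
```

For example, `ilm(13, 11, iterations=0)` yields 128 instead of 143 (about a 10% error), while two correction iterations already make this pair exact. In hardware the appeal is that every term is a shift and an add, so no full multiplier array is needed; the paper's point is that network training can absorb the residual error of the low-iteration variants.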
Pages: 57 - 65
Page count: 9
Related papers
50 records in total
  • [41] CoEvolvable hardware platform for automatic hardware design of neural networks
    Hammami, O
    Kuroda, K
    Zhao, Q
    Saito, K
    PROCEEDINGS OF IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY 2000, VOLS 1 AND 2, 2000, : 509 - 514
  • [42] Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey
    Armeniakos, Giorgos
    Zervakis, Georgios
    Soudris, Dimitrios
    Henkel, Joerg
    ACM COMPUTING SURVEYS, 2023, 55 (04)
  • [43] ARTIFICIAL NEURAL NETWORKS USING MOS ANALOG MULTIPLIERS
    HOLLIS, PW
    PAULOS, JJ
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 1990, 25 (03) : 849 - 855
  • [45] Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks
    Nojehdeh, Mohammadreza Esmali
    Altun, Mustafa
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2023, 42 (09) : 5428 - 5452
  • [46] Training Binarized Neural Networks Using Ternary Multipliers
    Ardakani, Amir
    Ardakani, Arash
    Gross, Warren J.
    IEEE DESIGN & TEST, 2021, 38 (06) : 44 - 52
  • [47] Efficient Utilization of FPGA Multipliers for Convolutional Neural Networks
    Boulasikis, M. A.
    Birbas, M.
    Tsafas, N.
    Kanakaris, N.
    2021 10TH INTERNATIONAL CONFERENCE ON MODERN CIRCUITS AND SYSTEMS TECHNOLOGIES (MOCAST), 2021,
  • [48] Approximate Multipliers Based on New Approximate Compressors
    Esposito, Darjn
    Strollo, Antonio Giuseppe Maria
    Napoli, Ettore
    De Caro, Davide
    Petra, Nicola
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2018, 65 (12) : 4169 - 4182
  • [49] Approximate Row-Merging-Based Multipliers for Neural Network Acceleration on FPGAs
    Aizaz, Zainab
    Khare, Kavita
    Tirmizi, Aizaz
    IEEE EMBEDDED SYSTEMS LETTERS, 2024, 16 (02) : 126 - 129
  • [50] Applicability domains of neural networks for toxicity prediction
    Perez-Santin, Efren
    de-la-Fuente-Valentin, Luis
    Garcia, Mariano Gonzalez
    Bravo, Kharla Andreina Segovia
    Hernandez, Fernando Carlos Lopez
    Sanchez, Jose Ignacio Lopez
    AIMS MATHEMATICS, 2023, 8 (11): : 27858 - 27900