VLSI implementation of a Binary Neural Network - two case studies

Citations: 0
Authors
Bermak, A [1 ]
Austin, J [1 ]
Affiliations
[1] Edith Cowan Univ, Sch Engn & Mech, Joondalup, WA 6027, Australia
Keywords
Binary Neural Networks; VLSI implementation; bit-level architecture; internal storage processors
DOI
10.1109/MN.1999.758889
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A comparison between a bit-level and a conventional VLSI implementation of a binary neural network is presented. The network is based on a Correlation Matrix Memory (CMM), which stores relationships between pairs of binary vectors. The bit-level architecture consists of an n x m array of bit-level processors holding the storage and computation elements. The conventional CMM architecture consists of a RAM holding the CMM storage and an array of counters. Since we are interested in the VLSI implementation of such networks, the hardware complexity and speed of both the bit-level and conventional architectures were compared using VLSI tools. A significant speedup is achieved by the bit-level architecture, since its speed is not limited by the memory-addressing delay. Moreover, the bit-level architecture is very simple and reduces bus/routing requirements, making it well suited to VLSI implementation. The main drawback of this approach compared to the conventional one is that it requires a large number of adders when dealing with a large number of inputs.
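The CMM operations the abstract refers to can be illustrated in software. The sketch below is a minimal functional model only, not the authors' bit-level VLSI design: it assumes the standard binary CMM scheme in which a pair is stored by OR-ing the outer product of the two binary vectors into the matrix, and recall sums matched bits per output line and applies a Willshaw-style threshold equal to the number of set input bits. All names (`cmm_store`, `cmm_recall`) are illustrative.

```python
import numpy as np

def cmm_store(W, x, y):
    # Store the association (x -> y) by OR-ing the outer product
    # of the binary pair into the weight matrix W (shape m x n).
    W |= np.outer(y, x)

def cmm_recall(W, x):
    # Each output line sums the weights selected by the set input bits;
    # a Willshaw-style threshold (number of set input bits) binarizes it.
    s = W @ x
    return (s >= x.sum()).astype(np.uint8)

# Toy n x m memory: n = 8 inputs, m = 6 outputs, all binary.
n, m = 8, 6
W = np.zeros((m, n), dtype=np.uint8)
x = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=np.uint8)
y = np.array([0, 1, 0, 0, 1, 0], dtype=np.uint8)
cmm_store(W, x, y)
print(cmm_recall(W, x))  # recovers y while the memory is lightly loaded
```

In the bit-level architecture discussed in the paper, the per-output summation (`W @ x` here) is what demands the array of adders; the conventional architecture instead reads W rows from RAM and accumulates with counters, which is why its speed is bounded by memory addressing.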
Pages: 374-379 (6 pages)
Related papers (50 total)
  • [31] Efficient VLSI implementation of modular neural network based hybrid median filter
    Nanduri, Sambamurthy
    Kamaraju, M.
    SOFT COMPUTING, 2023
  • [32] VLSI implementation of a neural network classifier based on the saturating linear activation function
    Bermak, A
    Bouzerdoum, A
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 981 - 985
  • [33] VLSI implementation of transcendental function hyperbolic tangent for deep neural network accelerators
    Rajput, Gunjan
    Raut, Gopal
    Chandra, Mahesh
    Vishvakarma, Santosh Kumar
    MICROPROCESSORS AND MICROSYSTEMS, 2021, 84
  • [34] ANALOG VLSI IMPLEMENTATION OF NEURAL NETWORKS
    VITTOZ, E
    ARTIFICIAL NEURAL NETWORKS: JOURNEES D'ELECTRONIQUE 1989, 1989, : 223 - 250
  • [35] VLSI realization of neural network
    Tan, Xilin
    Hu, Jincai
    Lang, Wayne
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 1993, 21 (04) : 98 - 100
  • [36] Optimal VLSI implementation of neural networks
    Beiu, V
    NEURAL NETWORKS AND THEIR APPLICATIONS, 1996, : 255 - 276
  • [37] A Novel, Efficient Implementation of a Local Binary Convolutional Neural Network
    Lin, Ing-Chao
    Tang, Chi-Huan
    Ni, Chi-Ting
    Hu, Xing
    Shen, Yu-Tong
    Chen, Pei-Yin
    Xie, Yuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (04) : 1413 - 1417
  • [38] Efficient FPGA Implementation of Local Binary Convolutional Neural Network
    Zhakatayev, Aidyn
    Lee, Jongeun
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 699 - 704
  • [39] An Approach of Binary Neural Network Energy-Efficient Implementation
    Gao, Jiabao
    Liu, Qingliang
    Lai, Jinmei
    ELECTRONICS, 2021, 10 (15)
  • [40] VLSI implementation of pulsating neural networks
    Schwartzglass, O
    Agranat, AJ
    NEUROCOMPUTING, 1996, 10 (04) : 405 - 413