FPGA Implementation of Neuron Model Using Piecewise Nonlinear Function on Double-Precision Floating-Point Format

Cited: 0
Authors
Kawamura, Satoshi [1 ]
Saito, Masato [2 ]
Yoshida, Hitoaki [3 ]
Affiliations
[1] Iwate Univ, Super Comp & Informat Sci Ctr, Morioka, Iwate 0208550, Japan
[2] P&A Technol Inc, Morioka, Iwate 0200834, Japan
[3] Iwate Univ, Fac Educ, Morioka, Iwate 0208550, Japan
Keywords
Artificial neuron model; Field programmable gate array (FPGA); Sigmoid function; Chaotic behavior; Piecewise nonlinear function; DESIGN;
DOI
10.1007/978-3-319-42007-3_54
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An artificial neuron model has been implemented in a field-programmable gate array (FPGA). The model can be applied to learning and training of neural networks. All data types are 64 bits, and piecewise first- and second-order functions are employed to approximate the sigmoid function. The constant values of the model are tuned so that the sigmoid-like approximate function is both continuous and continuously differentiable. All data types of the neuron correspond to double precision in the C language. The neuron is implemented as a 48-stage pipeline, and assessment with an Altera Cyclone IV predicts an operating speed of 85 MHz. Simulation of a four-neuron neural network on the FPGA yielded chaotic behavior; the chaos output by the FPGA is influenced by the calculation precision and by the characteristics of the output function. It is estimated that more than 1,000 neurons can be implemented in an Altera Cyclone IV. Obtaining chaotic behavior, in which nonlinearity plays a dominant role, demonstrates the effectiveness of this FPGA model, and the model therefore shows wide applicability.
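
As a concrete illustration of the approach the abstract describes, the sketch below builds a sigmoid-like function from piecewise constant and second-order segments, evaluated entirely in double precision (C's 64-bit double). The paper's own tuned constants are not reproduced in this record, so the breakpoints at +/-4 and the 1/32 coefficient are illustrative assumptions taken from a classical piecewise second-order approximation; they are chosen so that adjacent segments agree in both value and slope at every join, giving the continuity and continuous differentiability the abstract requires.

#include <stdio.h>

/* A minimal sketch, not the authors' tuned constants: a sigmoid-like
   curve assembled from piecewise constant and second-order segments,
   computed in double precision. The breakpoints +/-4 and the
   coefficient 1/32 are illustrative assumptions. */
static double sigmoid_piecewise(double x)
{
    if (x <= -4.0)
        return 0.0;                                 /* saturated low */
    if (x < 0.0)
        return (4.0 + x) * (4.0 + x) / 32.0;        /* rising quadratic */
    if (x < 4.0)
        return 1.0 - (4.0 - x) * (4.0 - x) / 32.0;  /* mirrored quadratic */
    return 1.0;                                     /* saturated high */
}

int main(void)
{
    /* At each join (x = -4, 0, 4) the adjacent segments agree in both
       value and slope: e.g. at x = 0 both branches give 0.5 with
       derivative 1/4, so the curve is continuously differentiable. */
    for (double x = -6.0; x <= 6.0; x += 1.0)
        printf("f(%+4.1f) = %.6f\n", x, sigmoid_piecewise(x));
    return 0;
}

Compiled with any C99 compiler, the sample prints 0.5 at the midpoint and saturates to 0 and 1 beyond +/-4. On an FPGA, each branch would presumably map onto pipelined double-precision multiply and add units, consistent with the 48-stage pipeline the abstract reports.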
Pages: 620-629 (10 pages)