Design Space Exploration of Neural Network Activation Function Circuits

Cited: 44
Authors
Yang, Tao [1 ]
Wei, Yadong [1 ]
Tu, Zhijun [1 ]
Zeng, Haolun [1 ]
Kinsy, Michel A. [2 ]
Zheng, Nanning [1 ]
Ren, Pengju [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Shaanxi, Peoples R China
[2] Boston Univ, Dept Elect & Comp Engn, Boston, MA 02215 USA
Funding
National Natural Science Foundation of China
Keywords
Activation functions; artificial neural networks (ANNs); exponential linear units (ELUs); hyperbolic tangent (tanh); scaled ELUs (SELUs); SIGMOID FUNCTION; GENERATORS;
DOI
10.1109/TCAD.2018.2871198
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology]
Discipline code
0812
Abstract
The widespread application of artificial neural networks has prompted researchers to experiment with field-programmable gate array (FPGA) and custom ASIC designs to accelerate their computation. These implementation efforts have generally focused on weight multiplication and signal summation operations, and less on the activation functions used in these applications. Yet efficient hardware implementations of nonlinear activation functions such as exponential linear units (ELU), scaled ELU (SELU), and hyperbolic tangent (tanh) are central to designing effective neural network accelerators, since these functions consume a substantial share of hardware resources. In this paper, we explore efficient hardware implementations of activation functions using purely combinational circuits, focusing on two widely used nonlinear activation functions, SELU and tanh. Our experiments demonstrate that neural networks are generally insensitive to the precision of the activation function. The results also show that the proposed combinational circuit-based approach is very efficient in terms of speed and area, with negligible accuracy loss on the MNIST, CIFAR-10, and ImageNet benchmarks. Synopsys Design Compiler synthesis results show that the circuit designs for tanh and SELU save 3.13x to 7.69x and 4.45x to 8.45x in area, respectively, compared with look-up table/memory-based implementations, and can operate at 5.14 GHz and 4.52 GHz, respectively, using the 28-nm SVT library. The implementation is available at: https://github.com/ThomasMrY/ActivationFunctionDemo.
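The abstract's claim that networks tolerate low-precision activations can be illustrated numerically. The sketch below is not the paper's circuit design; it merely quantizes the input and output of tanh to a fixed-point grid (the hypothetical `frac_bits` parameter sets the fractional bits), mimicking the reduced precision a small combinational activation circuit would deliver, and measures the worst-case deviation from exact tanh.

```python
import math

def tanh_fixed(x, frac_bits=6):
    """Illustrative low-precision tanh: quantize the input and the result
    to a fixed-point grid with `frac_bits` fractional bits. This models
    reduced-precision activation hardware, not the paper's actual circuit."""
    scale = 1 << frac_bits
    xq = round(x * scale) / scale          # quantize the input
    return round(math.tanh(xq) * scale) / scale  # quantize the output

# Worst-case absolute error versus exact tanh over [-4, 4].
max_err = max(abs(tanh_fixed(v) - math.tanh(v))
              for v in (i / 100 for i in range(-400, 401)))
```

With 6 fractional bits the error stays within a couple of quantization steps (on the order of 1/64), small enough that, per the paper's experiments, classification accuracy on benchmarks like MNIST is essentially unaffected.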
Pages: 1974-1978
Page count: 5