Efficient Neural Networks on the Edge with FPGAs by Optimizing an Adaptive Activation Function

Cited by: 2
Authors:
Jiang, Yiyue [1 ]
Vaicaitis, Andrius [2 ]
Dooley, John [2 ]
Leeser, Miriam [1 ]
Affiliations:
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Maynooth Univ, Dept Elect Engn, Maynooth W23 F2H6, Ireland
Keywords:
adaptive activation function (AAF); neural network; FPGA; deep learning; digital predistortion; model
DOI: 10.3390/s24061829
CLC Classification: O65 [Analytical Chemistry]
Discipline Codes: 070302; 081704
Abstract:
The implementation of neural networks (NNs) on edge devices enables local processing of wireless data, but deep neural networks (DNNs) bring high computational complexity and memory requirements. Shallow neural networks customized for specific problems are more efficient, requiring fewer resources and yielding a lower-latency solution; their smaller size also makes them suitable for real-time processing on edge devices. The main concern with shallow neural networks is their accuracy compared to DNNs. In this paper, we demonstrate that a customized adaptive activation function (AAF) can match the accuracy of a DNN. We designed an efficient FPGA implementation of a customized segmented spline curve neural network (SSCNN) structure that replaces the traditional fixed activation function with an AAF. We compared our SSCNN with different neural network structures such as a real-valued time-delay neural network (RVTDNN), an augmented real-valued time-delay neural network (ARVTDNN), and deep neural networks with different parameters. Our proposed SSCNN implementation uses 40% fewer hardware resources and no block RAMs compared to a DNN of similar accuracy. We experimentally validated this computationally efficient and memory-saving FPGA implementation of the SSCNN for digital predistortion of radio-frequency (RF) power amplifiers on the AMD/Xilinx RFSoC ZCU111. The implemented solution uses less than 3% of the available resources and allows the clock frequency to be raised to 221.12 MHz, enabling the transmission of wide-bandwidth signals.
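The core idea behind a segmented spline activation is to replace a fixed nonlinearity (e.g., tanh) with a piecewise curve whose per-segment values are trained along with the network weights. The following is a minimal illustrative sketch only, not the authors' FPGA implementation: the piecewise-linear form, the uniform knot layout on [-1, 1], the segment count, and the tanh-based initialization are all assumptions for demonstration.

```python
import numpy as np

def spline_activation(x, knots, values):
    """Piecewise-linear spline activation: linearly interpolates
    between trainable per-knot values at fixed knot positions,
    clamping to the end values outside the knot range."""
    return np.interp(x, knots, values)

# Hypothetical setup: 8 uniform segments on [-1, 1]. In training,
# `values` would be learnable parameters; here they are initialized
# from tanh purely for illustration.
knots = np.linspace(-1.0, 1.0, 9)
values = np.tanh(knots)
y = spline_activation(np.array([-0.5, 0.0, 2.0]), knots, values)
```

A fixed, uniform knot grid is what makes such a function attractive in hardware: evaluating it reduces to an index computation plus one multiply-add per sample, consistent with the resource savings the abstract reports.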
Pages: 17
Related Papers (50 total):
  • [1] Optimizing nonlinear activation function for convolutional neural networks
    Varshney, Munender
    Singh, Pravendra
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (06): 1323 - 1330
  • [2] Neural networks with adaptive spline activation function
    Campolucci, P
    Capparelli, F
    Guarnieri, S
    Piazza, F
    Uncini, A
    MELECON '96 - 8TH MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, PROCEEDINGS, VOLS I-III, 1996: 1442 - 1445
  • [3] Adaptive Morphing Activation Function for Neural Networks
    Herrera-Alcantara, Oscar
    Arellano-Balderas, Salvador
    FRACTAL AND FRACTIONAL, 2024, 8 (08)
  • [4] An adaptive activation function for higher order neural networks
    Xu, SX
    Zhang, M
    AI 2002: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2002, 2557: 356 - 362
  • [5] An adaptive activation function for multilayer feedforward neural networks
    Yu, CC
    Tang, YC
    Liu, BD
    2002 IEEE REGION 10 CONFERENCE ON COMPUTERS, COMMUNICATIONS, CONTROL AND POWER ENGINEERING, VOLS I-III, PROCEEDINGS, 2002: 645 - 650
  • [6] An Efficient Asymmetric Nonlinear Activation Function for Deep Neural Networks
    Chai, Enhui
    Yu, Wei
    Cui, Tianxiang
    Ren, Jianfeng
    Ding, Shusheng
    SYMMETRY-BASEL, 2022, 14 (05)
  • [7] Efficient Implementation of Activation Function on FPGA for Accelerating Neural Networks
    Qian, Kai
    Liu, Yinqiu
    Zhang, Zexu
    Wang, Kun
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023
  • [8] Realization of the sigmoid activation function for neural networks on current FPGAs by the table-driven method
    Ushenina, Inna V.
    TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE, 2024, (69)
  • [9] Optimizing Graph Neural Networks for Jet Tagging in Particle Physics on FPGAs
    Que, Zhiqiang
    Loo, Marcus
    Fan, Hongxiang
    Pierini, Maurizio
    Tapper, Alexander
    Luk, Wayne
    2022 32ND INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE LOGIC AND APPLICATIONS, FPL, 2022: 327 - 333