Efficient Neural Networks on the Edge with FPGAs by Optimizing an Adaptive Activation Function

Cited by: 2
Authors
Jiang, Yiyue [1 ]
Vaicaitis, Andrius [2 ]
Dooley, John [2 ]
Leeser, Miriam [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Maynooth Univ, Dept Elect Engn, Maynooth W23 F2H6, Ireland
Keywords
adaptive activation function (AAF); neural network; FPGA; deep learning; digital predistortion; model
DOI
10.3390/s24061829
Chinese Library Classification
O65 [Analytical Chemistry]
Subject classification codes
070302 ; 081704 ;
Abstract
The implementation of neural networks (NNs) on edge devices enables local processing of wireless data, but faces challenges such as high computational complexity and memory requirements when deep neural networks (DNNs) are used. Shallow neural networks customized for specific problems are more efficient, requiring fewer resources and resulting in a lower-latency solution. An additional benefit of the smaller network size is that it is suitable for real-time processing on edge devices. The main concern with shallow neural networks is their accuracy compared to DNNs. In this paper, we demonstrate that a customized adaptive activation function (AAF) can match the accuracy of a DNN. We designed an efficient FPGA implementation of a customized segmented spline curve neural network (SSCNN) structure that replaces the traditional fixed activation function with an AAF. We compared our SSCNN with different neural network structures such as a real-valued time-delay neural network (RVTDNN), an augmented real-valued time-delay neural network (ARVTDNN), and deep neural networks with different parameters. Our proposed SSCNN implementation uses 40% fewer hardware resources and no block RAMs compared to a DNN of similar accuracy. We experimentally validated this computationally efficient and memory-saving FPGA implementation of the SSCNN for digital predistortion of radio-frequency (RF) power amplifiers using the AMD/Xilinx RFSoC ZCU111. The implemented solution uses less than 3% of the available resources. It also enables increasing the clock frequency to 221.12 MHz, allowing the transmission of wide-bandwidth signals.
Pages: 17
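The abstract describes replacing a fixed activation function with an adaptive one whose shape is learned along with the network weights, realized as a segmented spline curve. The paper's exact SSCNN parameterization is not given here, so the following is only a minimal illustrative sketch of the general idea: a piecewise-linear activation with fixed, evenly spaced knot positions and learnable knot output values, initialized to the identity. All names and parameter choices (`n_segments`, the input range) are assumptions, not the authors' design.

```python
import numpy as np

class SegmentedSplineActivation:
    """Sketch of an adaptive piecewise-linear (spline-segment) activation.

    The knot x-positions are fixed and evenly spaced; the knot y-values
    are the learnable parameters that training would adjust. This is an
    illustration of the concept, not the paper's SSCNN implementation.
    """

    def __init__(self, n_segments=8, x_min=-1.0, x_max=1.0):
        self.knots_x = np.linspace(x_min, x_max, n_segments + 1)
        # Initialize knot outputs to the identity so the activation
        # starts near-linear before any adaptation.
        self.knots_y = self.knots_x.copy()  # learnable parameters

    def __call__(self, x):
        # Clamp inputs to the covered range, then linearly interpolate
        # between the two surrounding knots.
        x = np.clip(x, self.knots_x[0], self.knots_x[-1])
        return np.interp(x, self.knots_x, self.knots_y)
```

A fixed-knot, piecewise-linear form like this maps well to FPGAs, since evaluation reduces to an index computation plus one multiply-add per sample, with the small knot table held in registers or LUTs rather than block RAM.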