Efficient Neural Networks on the Edge with FPGAs by Optimizing an Adaptive Activation Function

Cited by: 2
Authors
Jiang, Yiyue [1 ]
Vaicaitis, Andrius [2 ]
Dooley, John [2 ]
Leeser, Miriam [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Maynooth Univ, Dept Elect Engn, Maynooth W23 F2H6, Ireland
Keywords
adaptive activation function (AAF); neural network; FPGA; deep learning; digital predistortion; DIGITAL PREDISTORTION; MODEL;
DOI
10.3390/s24061829
CLC Classification Number
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
The implementation of neural networks (NNs) on edge devices enables local processing of wireless data, but faces challenges such as high computational complexity and memory requirements when deep neural networks (DNNs) are used. Shallow neural networks customized for specific problems are more efficient, requiring fewer resources and resulting in a lower latency solution. An additional benefit of the smaller network size is that it is suitable for real-time processing on edge devices. The main concern with shallow neural networks is their accuracy performance compared to DNNs. In this paper, we demonstrate that a customized adaptive activation function (AAF) can meet the accuracy of a DNN. We designed an efficient FPGA implementation for a customized segmented spline curve neural network (SSCNN) structure to replace the traditional fixed activation function with an AAF. We compared our SSCNN with different neural network structures such as a real-valued time-delay neural network (RVTDNN), an augmented real-valued time-delay neural network (ARVTDNN), and deep neural networks with different parameters. Our proposed SSCNN implementation uses 40% fewer hardware resources and no block RAMs compared to the DNN with similar accuracy. We experimentally validated this computationally efficient and memory-saving FPGA implementation of the SSCNN for digital predistortion of radio-frequency (RF) power amplifiers using the AMD/Xilinx RFSoC ZCU111. The implemented solution uses less than 3% of the available resources. The solution also enables an increase of the clock frequency to 221.12 MHz, allowing the transmission of wide bandwidth signals.
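The abstract's core idea is replacing a fixed activation function with an adaptive one built from a segmented spline, whose segment values are trainable parameters. The sketch below illustrates the general shape of such an activation as a piecewise-linear spline with learnable values at fixed knots; the knot count, input range, and identity initialization are illustrative assumptions, not the paper's exact SSCNN formulation.

```python
import numpy as np

class SegmentedSplineActivation:
    """Piecewise-linear adaptive activation: trainable output values at
    fixed, evenly spaced knots over [lo, hi]; inputs between knots are
    linearly interpolated. Hypothetical sketch of the general technique;
    knot count, range, and initialization are assumptions."""

    def __init__(self, n_knots=8, lo=-1.0, hi=1.0):
        self.knots = np.linspace(lo, hi, n_knots)  # fixed breakpoints
        self.values = self.knots.copy()            # learnable; init = identity
        self.lo, self.hi = lo, hi

    def __call__(self, x):
        # Clip to the spline's support, then interpolate between knot values.
        x = np.clip(x, self.lo, self.hi)
        return np.interp(x, self.knots, self.values)

act = SegmentedSplineActivation()
print(act(np.array([0.0, 0.5, -2.0])))  # identity init -> [0.0, 0.5, -1.0]
```

Because each output is a lookup plus one multiply-add between two neighboring knot values, this form maps naturally to FPGA logic without block RAM tables, which is consistent with the resource savings the abstract reports.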
Pages: 17