A Fourier domain acceleration framework for convolutional neural networks

Cited by: 9
Authors
Lin, Jinhua [1 ,2 ]
Ma, Lin [3 ]
Yao, Yu [1 ]
Affiliations
[1] Changchun Univ Technol, Sch Comp Applicat Technol, Yanan St 2055, Changchun, Jilin, Peoples R China
[2] Univ Chinese Acad Sci, Machinery & Elect Engn, Yu Quan Rd 19, Beijing, Peoples R China
[3] FAW Foundry Co Ltd, DongFeng St 83, Changchun, Jilin, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural networks; Deep learning; Forward/backward propagation passes; Activation function; Downsampling operations; LANGUAGE;
DOI
10.1016/j.neucom.2019.06.080
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Accelerating the training and inference of convolutional neural networks (CNNs) plays a significant role in deep learning on large-scale datasets. However, such acceleration is difficult to achieve with traditional Fourier domain frameworks, because Fourier domain training and inference depend on many complicated factors, such as the architecture of the Fourier domain propagation passes, the representation of the activation function, and the design of the downsampling operations. This paper proposes a conceptually intuitive, useful and general Fourier domain acceleration framework for CNNs. Taking the proposed Fourier domain rectified linear unit (FReLU) as the activation function and the proposed Fourier domain pooling function (FPool) as the downsampling function, a Fourier domain acceleration framework is established for CNNs, and the inverse activation function (FReLU^(-1)) and inverse downsampling function (FPool^(-1)) are derived for the backward propagation pass. Furthermore, a block decomposition pipeline is integrated into the Fourier domain forward/backward propagation passes to further accelerate training and inference. The results show that the proposed framework accelerates CNN training and inference by a significant factor without reducing recognition precision. (C) 2019 Elsevier B.V. All rights reserved.
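The acceleration described in the abstract rests on the convolution theorem: convolution in the spatial domain becomes element-wise multiplication after an FFT, so the expensive sliding-window products in a CNN layer can be replaced by transforms and pointwise multiplies. The sketch below is not the authors' FReLU/FPool implementation; it is a minimal NumPy illustration of this underlying equivalence, checked against a direct circular convolution.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution computed via the FFT (illustrative)."""
    H, W = image.shape
    # Zero-pad the kernel to the image size while transforming it.
    K = np.fft.fft2(kernel, s=(H, W))
    I = np.fft.fft2(image)
    # Convolution in space == point-wise product in frequency.
    return np.real(np.fft.ifft2(I * K))

def direct_circular_conv2d(image, kernel):
    """Reference circular convolution computed by explicit loops."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            for u in range(kh):
                for v in range(kw):
                    out[(i + u) % H, (j + v) % W] += image[i, j] * kernel[u, v]
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((3, 3))
assert np.allclose(fft_conv2d(img, ker), direct_circular_conv2d(img, ker))
```

For an N x N feature map and K x K kernel, the direct form costs O(N^2 K^2) while the FFT form costs O(N^2 log N) independent of K, which is why keeping activations and pooling in the Fourier domain (as the paper's FReLU and FPool do) avoids repeated transforms between layers.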
Pages: 254-268 (15 pages)
Related papers (showing 10 of 50)
  • [1] A Fourier Domain Training Framework for Convolutional Neural Networks Based on the Fourier Domain Pyramid Pooling Method and Fourier Domain Exponential Linear Unit
    Lin, Jinhua
    Ma, Lin
    Yao, Yu
    [J]. IEEE ACCESS, 2019, 7 : 116612 - 116631
  • [2] Frequency-Domain Inference Acceleration for Convolutional Neural Networks Using ReRAMs
    Liu, Bosheng
    Jiang, Zhuoshen
    Wu, Yalan
    Wu, Jigang
    Chen, Xiaoming
    Liu, Peng
    Zhou, Qingguo
    Han, Yinhe
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (12) : 3133 - 3146
  • [3] FCNN: Fourier Convolutional Neural Networks
    Pratt, Harry
    Williams, Bryan
    Coenen, Frans
    Zheng, Yalin
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2017, PT I, 2017, 10534 : 786 - 798
  • [4] FPGA Implementation and Acceleration of Convolutional Neural Networks
    Pisharody, Jayanth N.
    Pranav, K. B.
    Ranjitha, M.
    Rajeshwari, B.
    [J]. 2021 6TH INTERNATIONAL CONFERENCE FOR CONVERGENCE IN TECHNOLOGY (I2CT), 2021,
  • [5] Efficient Hardware Acceleration of Convolutional Neural Networks
    Kala, S.
    Jose, Babita R.
    Mathew, Jimson
    Nalesh, S.
    [J]. 32ND IEEE INTERNATIONAL SYSTEM ON CHIP CONFERENCE (IEEE SOCC 2019), 2019, : 191 - 192
  • [6] Optimization and acceleration of convolutional neural networks: A survey
    Habib, Gousia
    Qureshi, Shaima
    [J]. JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (07) : 4244 - 4268
  • [7] A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks
    Wang, Dong
    Bai, Xiao
    Zhou, Lei
    Zhou, Jun
    [J]. 2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 768 - 775
  • [8] Frequency Domain Acceleration of Convolutional Neural Networks on CPU-FPGA Shared Memory System
    Zhang, Chi
    Prasanna, Viktor
    [J]. FPGA'17: PROCEEDINGS OF THE 2017 ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS, 2017, : 35 - 44
  • [9] FAQ-CNN: A Flexible Acceleration Framework for Quantized Convolutional Neural Networks on Embedded FPGAs
    Xie, K.
    Lu, Y.
    Jin, Z.
    Liu, Y.
    Gong, C.
    Chen, X.
    Li, T.
    [J]. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2022, 59 (07): : 1409 - 1427
  • [10] A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration
    Ghimire, Deepak
    Kil, Dayoung
    Kim, Seong-heum
    [J]. ELECTRONICS, 2022, 11 (06)