Efficient Convolution Architectures for Convolutional Neural Network

Cited: 0
Authors
Wang, Jichen [1]
Lin, Jun [1]
Wang, Zhongfeng [1]
Affiliation
[1] Nanjing Univ, Sch Elect Sci & Engn, Nanjing, Jiangsu, Peoples R China
Keywords
RECOGNITION;
DOI
None available
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Codes
0808; 0809;
Abstract
Convolutional Neural Network (CNN) is the state-of-the-art deep learning approach employed in various applications due to its remarkable performance. Convolutions in CNNs generally dominate the overall computational complexity and thus consume major computational power in real implementations. In this paper, efficient hardware architectures incorporating the parallel fast finite impulse response (FIR) algorithm (FFA) for CNN convolution implementations are discussed. The theoretical derivation of 3- and 5-parallel FFAs is presented, and the corresponding 3- and 5-parallel fast convolution units (FCUs) are proposed for the most commonly used 3 x 3 and 5 x 5 convolutional kernels in CNNs, respectively. Compared to conventional CNN convolution architectures, the proposed FCUs significantly reduce the number of multiplications used in convolutions. Additionally, the FCUs minimize the number of reads from the feature map memory. Furthermore, a reconfigurable FCU architecture which suits the convolutions of both 3 x 3 and 5 x 5 kernels is proposed. Based on this, an efficient top-level architecture for processing a complete convolutional layer in a CNN is developed. To quantify the benefits of the proposed FCUs, the design of an FCU is coded in RTL and synthesized with TSMC 90nm CMOS technology. The implementation results demonstrate that 30% and 36% of the computational energy can be saved compared to conventional solutions with 3 x 3 and 5 x 5 kernels in CNN, respectively.
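The multiplication savings the abstract claims come from the parallel-FFA identity: a 3-parallel block of a 3-tap FIR filter can be computed with 6 multiplications instead of the 9 needed by direct convolution, at the cost of a few extra additions. The sketch below illustrates this standard textbook-style decomposition numerically; it is not the authors' RTL design, and the function name `fir3_ffa` is ours:

```python
def fir3_ffa(h, x):
    """3-parallel fast FIR algorithm (FFA) sketch for a 3-tap filter.

    Computes y[n] = h0*x[n] + h1*x[n-1] + h2*x[n-2] (zero initial state),
    producing 3 outputs per block with 6 multiplications instead of 9.
    """
    h0, h1, h2 = h
    assert len(x) % 3 == 0, "input length must be a multiple of 3"
    y = []
    # Products carried over from the previous block (the z^-3 delay terms);
    # zero for the first block since the filter starts from rest.
    prev_m1 = prev_m2 = prev_m4 = 0
    for i in range(0, len(x), 3):
        x0, x1, x2 = x[i], x[i + 1], x[i + 2]
        # The 6 multiplications of the 3-parallel FFA:
        m0 = h0 * x0
        m1 = h1 * x1
        m2 = h2 * x2
        m3 = (h0 + h1) * (x0 + x1)
        m4 = (h1 + h2) * (x1 + x2)
        m5 = (h0 + h1 + h2) * (x0 + x1 + x2)
        # Reconstruct the 3 outputs with additions only
        # (2*m1 is a bit shift in hardware, not a multiplication):
        y.append(m0 + (prev_m4 - prev_m1 - prev_m2))   # y[i]
        y.append(m3 - m0 - m1 + prev_m2)               # y[i+1]
        y.append(m5 - m3 - m4 + 2 * m1)                # y[i+2]
        prev_m1, prev_m2, prev_m4 = m1, m2, m4
    return y


# Reference: direct convolution, 9 multiplications per 3 outputs.
def fir3_direct(h, x):
    return [sum(h[k] * x[n - k] for k in range(3) if n - k >= 0)
            for n in range(len(x))]
```

For a 2D 3 x 3 kernel, the paper applies this row-wise: each kernel row is a 3-tap FIR filter, so the per-row saving (6/9) carries over to the full convolution.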
Pages: 5