Filter pruning with uniqueness mechanism in the frequency domain for efficient neural networks

Cited by: 6
Authors
Zhang, Shuo [1 ]
Gao, Mingqi [2 ,3 ]
Ni, Qiang [1 ]
Han, Jungong [4 ]
Institutions
[1] Univ Lancaster, Sch Comp & Commun, Lancaster LA1 4WA, England
[2] Univ Warwick, WMG Data Sci, Coventry CV4 7AL, England
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Univ Sheffield, Dept Comp Sci, 211 Portobello, Sheffield S1 4DP, England
Keywords
Deep learning; Model compression; Computer vision; Image classification; Frequency-domain transformation
DOI
10.1016/j.neucom.2023.02.004
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Filter pruning has drawn extensive attention due to its advantage in reducing computational costs and memory requirements of deep convolutional neural networks. However, most existing methods only prune filters based on their intrinsic properties or spatial feature maps, ignoring the correlation between filters. In this paper, we suggest the correlation is valuable and consider it from a novel view: the frequency domain. Specifically, we first transfer features to the frequency domain by Discrete Cosine Transform (DCT). Then, for each feature map, we compute a uniqueness score, which measures its probability of being replaced by others. This allows us to prune the filters corresponding to the low-uniqueness maps without significant performance degradation. Compared to the methods focusing on intrinsic properties, our proposed method introduces a more comprehensive criterion to prune filters, further improving the network compactness while preserving good performance. In addition, our method is more robust against noise than the spatial ones since the critical clues for pruning are more concentrated after DCT. Experimental results demonstrate the superiority of our method. To be specific, our method outperforms the baseline ResNet-56 by 0.38% on CIFAR-10 while reducing the floating-point operations (FLOPs) by 47.4%. In addition, a consistent improvement can be observed when pruning the baseline ResNet-110: 0.23% performance increase and up to 71% FLOPs drop. Finally, on ImageNet, our method reduces the FLOPs of the baseline ResNet-50 by 48.7% with only 0.32% accuracy loss. (c) 2023 Published by Elsevier B.V.
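The pipeline described in the abstract — DCT of feature maps, a per-map uniqueness score, and pruning of low-uniqueness filters — can be sketched as below. This is an illustrative reconstruction from the abstract only, not the authors' released code: the cosine-similarity "replaceability" test, the `keep_ratio` parameter, and the score definition `1 - max similarity` are assumptions standing in for the paper's actual criterion.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def dct2(x):
    # 2-D DCT-II of a square feature map via separable 1-D transforms.
    c = dct_matrix(x.shape[0])
    return c @ x @ c.T

def uniqueness_scores(feature_maps):
    # feature_maps: (num_filters, h, w) activations from one layer.
    # Transform each map to the frequency domain, then score each map by
    # how well its closest peer could replace it: a map that is nearly
    # identical to another (high cosine similarity) gets a low score.
    freq = np.stack([dct2(f).ravel() for f in feature_maps])
    freq = freq / (np.linalg.norm(freq, axis=1, keepdims=True) + 1e-12)
    sim = np.abs(freq @ freq.T)      # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)       # ignore self-similarity
    return 1.0 - sim.max(axis=1)     # uniqueness = 1 - best replacement

def prune_mask(feature_maps, keep_ratio=0.5):
    # Keep the most unique filters; mark the rest for pruning.
    scores = uniqueness_scores(feature_maps)
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

In this sketch a duplicated feature map scores zero uniqueness (its twin replaces it perfectly), so duplicates are pruned first; in practice the scores would be aggregated over a calibration batch rather than a single forward pass.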
Pages: 116-124 (9 pages)
Related Papers
50 items total
  • [1] Holistic Filter Pruning for Efficient Deep Neural Networks
    Enderich, Lukas
    Timm, Fabian
    Burgard, Wolfram
    [J]. 2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 2595 - 2604
  • [2] Frequency-Domain Dynamic Pruning for Convolutional Neural Networks
    Liu, Zhenhua
    Xu, Jizheng
    Peng, Xiulian
    Xiong, Ruiqin
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Auto-Balanced Filter Pruning for Efficient Convolutional Neural Networks
    Ding, Xiaohan
    Ding, Guiguang
    Han, Jungong
    Tang, Sheng
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 6797 - 6804
  • [4] Filter Pruning for Efficient Transfer Learning in Deep Convolutional Neural Networks
    Reinhold, Caique
    Roisenberg, Mauro
    [J]. ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 191 - 202
  • [5] Efficient Convolution Neural Networks for Object Tracking Using Separable Convolution and Filter Pruning
    Mao, Yuanhong
    He, Zhanzhuang
    Ma, Zhong
    Tang, Xuehan
    Wang, Zhuping
    [J]. IEEE ACCESS, 2019, 7 : 106466 - 106474
  • [6] Zero-Keep Filter Pruning for Energy/Power Efficient Deep Neural Networks
    Woo, Yunhee
    Kim, Dongyoung
    Jeong, Jaemin
    Ko, Young-Woong
    Lee, Jeong-Gun
    [J]. ELECTRONICS, 2021, 10 (11)
  • [7] Magnitude and Similarity Based Variable Rate Filter Pruning for Efficient Convolution Neural Networks
    Ghimire, Deepak
    Kim, Seong-Heum
    [J]. APPLIED SCIENCES-BASEL, 2023, 13 (01):
  • [8] A Novel Clustering-Based Filter Pruning Method for Efficient Deep Neural Networks
    Wei, Xiaohui
    Shen, Xiaoxian
    Zhou, Changbao
    Yue, Hengshan
    [J]. ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT II, 2020, 12453 : 245 - 258
  • [9] Filter Pruning via Learned Representation Median in the Frequency Domain
    Zhang, Xin
    Xie, Weiying
    Li, Yunsong
    Lei, Jie
    Du, Qian
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (05) : 3165 - 3175
  • [10] FRACTIONAL STEP DISCRIMINANT PRUNING: A FILTER PRUNING FRAMEWORK FOR DEEP CONVOLUTIONAL NEURAL NETWORKS
    Gkalelis, Nikolaos
    Mezaris, Vasileios
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2020,