Filter pruning with uniqueness mechanism in the frequency domain for efficient neural networks

Cited: 6
Authors
Zhang, Shuo [1 ]
Gao, Mingqi [2 ,3 ]
Ni, Qiang [1 ]
Han, Jungong [4 ]
Affiliations
[1] Univ Lancaster, Sch Comp & Commun, Lancaster LA1 4WA, England
[2] Univ Warwick, WMG Data Sci, Coventry CV4 7AL, England
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Univ Sheffield, Dept Comp Sci, 211 Portobello, Sheffield S1 4DP, England
Keywords
Deep learning; Model compression; Computer vision; Image classification; Frequency-domain transformation
DOI
10.1016/j.neucom.2023.02.004
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Filter pruning has drawn extensive attention due to its advantage in reducing the computational costs and memory requirements of deep convolutional neural networks. However, most existing methods prune filters based only on their intrinsic properties or spatial feature maps, ignoring the correlation between filters. In this paper, we argue that this correlation is valuable and consider it from a novel view: the frequency domain. Specifically, we first transfer features to the frequency domain by the Discrete Cosine Transform (DCT). Then, for each feature map, we compute a uniqueness score, which measures the probability of its being replaced by others. This allows us to prune the filters corresponding to low-uniqueness maps without significant performance degradation. Compared with methods focusing on intrinsic properties, our proposed method introduces a more comprehensive pruning criterion, further improving network compactness while preserving good performance. In addition, our method is more robust against noise than spatial-domain methods, since the critical clues for pruning are more concentrated after the DCT. Experimental results demonstrate the superiority of our method. Specifically, our method outperforms the baseline ResNet-56 by 0.38% on CIFAR-10 while reducing floating-point operations (FLOPs) by 47.4%. A consistent improvement is observed when pruning the baseline ResNet-110: a 0.23% performance increase and up to a 71% FLOPs drop. Finally, on ImageNet, our method reduces the FLOPs of the baseline ResNet-50 by 48.7% with only a 0.32% accuracy loss. (c) 2023 Published by Elsevier B.V.
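The abstract outlines the pipeline (DCT on feature maps, a per-map uniqueness score, pruning of the filters behind low-uniqueness maps) but not the scoring formula. The following is a minimal NumPy/SciPy sketch of one plausible reading: cosine similarity between truncated DCT spectra as the replaceability measure. The function names uniqueness_scores and filters_to_prune, the keep hyper-parameter, and the 1 - max-similarity scoring are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np
from scipy.fft import dctn

def uniqueness_scores(feature_maps, keep=8):
    """Score each feature map by how hard it is to replace with another.

    feature_maps: (C, H, W) activations of one convolutional layer.
    keep: side length of the low-frequency DCT block retained per map
          (an assumed hyper-parameter; the DCT concentrates the salient
          signal there, so high-frequency noise is discarded).
    Returns one score per channel; low scores mark replaceable maps.
    """
    C, H, W = feature_maps.shape
    k = min(keep, H, W)
    # 1) 2-D DCT per map, then keep only the top-left (low-frequency) block.
    freq = np.stack([dctn(fm, norm="ortho")[:k, :k] for fm in feature_maps])
    flat = freq.reshape(C, -1)
    # 2) Pairwise cosine similarity between the truncated spectra.
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
    # 3) A map closely mimicked by some other map is replaceable:
    #    uniqueness = 1 - max similarity to any other map (assumed form).
    return 1.0 - sim.max(axis=1)

def filters_to_prune(feature_maps, ratio=0.5):
    """Indices of the least-unique channels, whose filters can be pruned."""
    scores = uniqueness_scores(feature_maps)
    n = int(len(scores) * ratio)
    return np.argsort(scores)[:n]
```

For example, `filters_to_prune(np.random.rand(64, 32, 32), ratio=0.47)` would return the 30 channels of a 64-channel layer whose spectra are most closely mimicked by another channel; truncating to the low-frequency block is what distinguishes this from plain spatial similarity, since an orthonormal DCT alone preserves cosine similarity.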
Pages: 116 - 124
Page count: 9
Related Papers
50 items in total
  • [21] TRP: Trained Rank Pruning for Efficient Deep Neural Networks
    Xu, Yuhui
    Li, Yuxi
    Zhang, Shuai
    Wen, Wei
    Wang, Botao
    Qi, Yingyong
    Chen, Yiran
    Lin, Weiyao
    Xiong, Hongkai
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 977 - 983
  • [22] Structured Term Pruning for Computational Efficient Neural Networks Inference
    Huang, Kai
    Li, Bowen
    Chen, Siang
    Claesen, Luc
    Xi, Wei
    Chen, Junjian
    Jiang, Xiaowen
    Liu, Zhili
    Xiong, Dongliang
    Yan, Xiaolang
    [J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (01) : 190 - 203
  • [23] Global balanced iterative pruning for efficient convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
[J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (23): 21119 - 21138
  • [24] Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning
    Singh, Pravendra
    Verma, Vinay Kumar
    Rai, Piyush
    Namboodiri, Vinay P.
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04) : 838 - 847
  • [25] Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration
    He, Yang
    Ding, Yuhang
    Liu, Ping
    Zhu, Linchao
    Zhang, Hanwang
    Yang, Yi
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2006 - 2015
  • [26] SFP: Similarity-based filter pruning for deep neural networks
    Li, Guoqing
    Li, Rengang
    Li, Tuo
    Shen, Chaoyao
    Zou, Xiaofeng
    Wang, Jiuyang
    Wang, Changhong
    Li, Nanjun
    [J]. INFORMATION SCIENCES, 2025, 689
  • [28] Using Feature Entropy to Guide Filter Pruning for Efficient Convolutional Networks
    Li, Yun
    Wang, Luyang
    Peng, Sifan
    Kumar, Aakash
    Yin, Baoqun
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 263 - 274
  • [29] Zero-Keep Filter Pruning for Energy Efficient Deep Neural Network
    Woo, Yunhee
    Kim, Dongyoung
    Jeong, Jaemin
    Ko, Young-Woong
    Lee, Jeong-Gun
    [J]. 11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020, : 1288 - 1292
  • [30] Efficient realization of the block frequency domain adaptive filter
    Schobben, DWE
    Egelmeers, GPM
    Sommen, PCW
[J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 1997, : 2257 - 2260