Filter pruning with uniqueness mechanism in the frequency domain for efficient neural networks

Cited by: 6
Authors
Zhang, Shuo [1 ]
Gao, Mingqi [2 ,3 ]
Ni, Qiang [1 ]
Han, Jungong [4 ]
Affiliations
[1] Univ Lancaster, Sch Comp & Commun, Lancaster LA1 4WA, England
[2] Univ Warwick, WMG Data Sci, Coventry CV4 7AL, England
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Univ Sheffield, Dept Comp Sci, 211 Portobello, Sheffield S1 4DP, England
Keywords
Deep learning; Model compression; Computer vision; Image classification; Frequency-domain transformation
DOI
10.1016/j.neucom.2023.02.004
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Filter pruning has drawn extensive attention due to its advantage in reducing the computational costs and memory requirements of deep convolutional neural networks. However, most existing methods prune filters based only on their intrinsic properties or spatial feature maps, ignoring the correlation between filters. In this paper, we argue that this correlation is valuable and consider it from a novel view: the frequency domain. Specifically, we first transform features into the frequency domain by the Discrete Cosine Transform (DCT). Then, for each feature map, we compute a uniqueness score, which measures the probability of its being replaced by others. This allows us to prune the filters corresponding to low-uniqueness maps without significant performance degradation. Compared to methods focusing on intrinsic properties, our proposed method introduces a more comprehensive criterion for pruning filters, further improving network compactness while preserving good performance. In addition, our method is more robust against noise than spatial-domain methods, since the critical clues for pruning are more concentrated after the DCT. Experimental results demonstrate the superiority of our method. Specifically, our method outperforms the baseline ResNet-56 by 0.38% on CIFAR-10 while reducing the floating-point operations (FLOPs) by 47.4%. A consistent improvement is observed when pruning the baseline ResNet-110: a 0.23% performance increase and up to a 71% FLOPs drop. Finally, on ImageNet, our method reduces the FLOPs of the baseline ResNet-50 by 48.7% with only a 0.32% accuracy loss. © 2023 Published by Elsevier B.V.
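To make the pruning criterion concrete, the following is a minimal NumPy/SciPy sketch of the idea described in the abstract, not the authors' implementation: each channel's feature map is transformed with a 2-D DCT, and a uniqueness score is derived from how similar its frequency coefficients are to those of the other channels. The cosine-similarity measure, the toy activation shapes, and the 50% pruning ratio are illustrative assumptions.

```python
# Sketch of a frequency-domain uniqueness score for filter pruning.
# Assumptions (not from the paper): cosine similarity between DCT
# coefficients as the replaceability measure, and a fixed prune ratio.
import numpy as np
from scipy.fft import dctn

def uniqueness_scores(feature_maps: np.ndarray) -> np.ndarray:
    """feature_maps: (C, H, W) activations of one layer for one input."""
    # 2-D DCT per channel concentrates the informative coefficients.
    freq = dctn(feature_maps, axes=(1, 2), norm="ortho")
    flat = freq.reshape(freq.shape[0], -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = flat @ flat.T                 # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)      # ignore self-similarity
    # A map that closely resembles some other map is easy to replace,
    # so its uniqueness score is low.
    return 1.0 - sim.max(axis=1)

# Usage: prune the filters whose maps have the lowest uniqueness.
maps = np.random.randn(64, 32, 32).astype(np.float32)   # toy activations
scores = uniqueness_scores(maps)
prune_ratio = 0.5                                        # assumed ratio
keep = np.argsort(scores)[int(len(scores) * prune_ratio):]
print(f"keeping {len(keep)} of {len(scores)} filters")
```

In practice such scores would be averaged over a calibration set before deciding which filters to remove; the single-input version above only illustrates the scoring step.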
Pages: 116-124
Page count: 9
Related papers
50 records in total
  • [41] Xu, Jianrong; Diao, Boyu; Cui, Bifeng; Yang, Kang; Li, Chao; Hong, Hailong. Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022.
  • [42] Reiners, Malena; Klamroth, Kathrin; Heldmann, Fabian; Stiglmayr, Michael. Efficient and sparse neural networks by pruning weights in a multiobjective learning approach. COMPUTERS & OPERATIONS RESEARCH, 2022, 141.
  • [43] Wang, Yunhe; Xu, Chang; Xu, Chao; Tao, Dacheng. Packing Convolutional Neural Networks in the Frequency Domain. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2019, 41 (10): 2495-2510.
  • [44] Chance, JE; Worden, K; Tomlinson, GR. Frequency domain analysis of NARX neural networks. JOURNAL OF SOUND AND VIBRATION, 1998, 213 (05): 915-941.
  • [45] Chen, Wenlin; Wilson, James; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin. Compressing Convolutional Neural Networks in the Frequency Domain. KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016: 1475-1484.
  • [46] Abdi, Afshin; Rashidi, Saeed; Fekri, Faramarz; Krishna, Tushar. Efficient Distributed Inference of Deep Neural Networks via Restructuring and Pruning. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023: 6640-6648.
  • [47] Wang, Huan; Hu, Xinyi; Zhang, Qiming; Wang, Yuehai; Yu, Lu; Hu, Haoji. Structured Pruning for Efficient Convolutional Neural Networks via Incremental Regularization. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04): 775-788.
  • [48] Guo, Qingbei; Wu, Xiao-Jun; Kittler, Josef; Feng, Zhiquan. Weak sub-network pruning for strong and efficient neural networks. NEURAL NETWORKS, 2021, 144: 614-626.
  • [49] Qi, Chen; Shen, Shibo; Li, Rongpeng; Zhao, Zhifeng; Liu, Qing; Liang, Jing; Zhang, Honggang. An efficient pruning scheme of deep neural networks for Internet of Things applications. EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2021, 2021 (01).
  • [50] Inoue, H; Narihisa, H. Efficient pruning method for ensemble self-generating neural networks. CCCT 2003, VOL 1, PROCEEDINGS: COMPUTING/INFORMATION SYSTEMS AND TECHNOLOGIES, 2003: 58-63.