Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks

Cited: 0
Authors
He, Yang [1 ,2 ]
Kang, Guoliang [2 ]
Dong, Xuanyi [2 ]
Fu, Yanwei [3 ]
Yang, Yi [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol, SUSTech UTS Joint Ctr CIS, Shenzhen, Guangdong, Peoples R China
[2] Univ Technol Sydney, CAI, Sydney, NSW, Australia
[3] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
Funding
Australian Research Council
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a Soft Filter Pruning (SFP) method to accelerate the inference of deep Convolutional Neural Networks (CNNs). Specifically, SFP allows the pruned filters to be updated when the model is trained after pruning. This gives SFP two advantages over previous works: (1) Larger model capacity. Updating previously pruned filters gives our approach a larger optimization space than fixing the filters to zero, so the network trained by our method has a larger capacity to learn from the training data. (2) Less dependence on the pre-trained model. The larger capacity enables SFP to train from scratch and prune the model simultaneously, whereas previous filter pruning methods must be applied to a pre-trained model to guarantee their performance. Empirically, SFP trained from scratch outperforms previous filter pruning methods. Moreover, our approach has been demonstrated effective for many advanced CNN architectures. Notably, on ILSVRC-2012, SFP reduces more than 42% of the FLOPs of ResNet-101 with a 0.2% top-5 accuracy improvement, advancing the state of the art. Code is publicly available on GitHub: https://github.com/he-y/soft-filter-pruning
Pages: 2234-2240
Number of pages: 7
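The core mechanism described in the abstract, zeroing low-norm filters at the end of each training epoch while leaving them trainable so they may recover later, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustrative sketch, not the authors' released implementation; the L2-norm selection criterion and the 30% pruning rate are assumptions chosen for the example, and train_one_epoch is a hypothetical helper.

```python
import torch
import torch.nn as nn

def soft_prune_filters(model: nn.Module, prune_rate: float = 0.3) -> None:
    """Zero out the lowest-L2-norm filters of every conv layer.

    Unlike "hard" pruning, the zeroed filters stay in the model and keep
    receiving gradient updates, so they can be recovered in later epochs.
    """
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                weight = module.weight.data                  # (out_ch, in_ch, kH, kW)
                norms = weight.flatten(1).norm(p=2, dim=1)   # L2 norm per filter
                n_prune = int(weight.size(0) * prune_rate)
                if n_prune == 0:
                    continue
                _, idx = torch.topk(norms, n_prune, largest=False)
                weight[idx] = 0.0                            # soft-pruned: zeroed, not removed

# Sketch of the training loop: prune after every epoch while training
# from scratch, as SFP permits; filters are only physically removed once,
# after training finishes.
# for epoch in range(num_epochs):
#     train_one_epoch(model, loader, optimizer)   # hypothetical helper
#     soft_prune_filters(model, prune_rate=0.3)
```

Because the zeroed filters still receive gradients, a filter pruned in one epoch can regain a large norm and escape pruning in the next, which is what gives the method its larger optimization space compared with hard pruning.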
Related Papers
50 records in total
  • [21] Channel Pruning for Accelerating Very Deep Neural Networks
    He, Yihui
    Zhang, Xiangyu
    Sun, Jian
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1398 - 1406
  • [22] Accelerating deep neural network filter pruning with mask-aware convolutional computations on modern CPUs
    Ma, Xiu
    Li, Guangli
    Liu, Lei
    Liu, Huaxiao
    Wang, Xueying
    NEUROCOMPUTING, 2022, 505 : 375 - 387
  • [23] Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks
    Mitsuno, Kakeru
    Kurita, Takio
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1089 - 1095
  • [24] HFP: Hardware-Aware Filter Pruning for Deep Convolutional Neural Networks Acceleration
    Yu, Fang
    Han, Chuanqi
    Wang, Pengcheng
    Huang, Ruoran
    Huang, Xi
    Cui, Li
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 255 - 262
  • [25] Structured Pruning for Deep Convolutional Neural Networks: A Survey
    He, Yang
    Xiao, Lingao
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 2900 - 2919
  • [26] Tutor-Instructing Global Pruning for Accelerating Convolutional Neural Networks
    Yu, Fang
    Cui, Li
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2792 - 2799
  • [27] RepSGD: Channel Pruning Using Reparameterization for Accelerating Convolutional Neural Networks
    Kim, Nam Joon
    Kim, Hyun
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023
  • [28] Channel pruning based on mean gradient for accelerating Convolutional Neural Networks
    Liu, Congcong
    Wu, Huaming
    SIGNAL PROCESSING, 2019, 156 : 84 - 91
  • [29] Complex hybrid weighted pruning method for accelerating convolutional neural networks
    Geng, Xu
    Gao, Jinxiong
    Zhang, Yonghui
    Xu, Dingtan
    SCIENTIFIC REPORTS, 2024, 14 (01)