Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks

Cited by: 0
Authors
You, Zhonghui [1 ]
Yan, Kun [1 ]
Ye, Jinmian [2 ]
Ma, Meng [3 ]
Wang, Ping [1 ,3 ,4 ]
Affiliations
[1] Peking Univ, Sch Software & Microelect, Beijing, Peoples R China
[2] Momenta, Beijing, Peoples R China
[3] Peking Univ, Natl Engn Res Ctr Software Engn, Beijing, Peoples R China
[4] Minist Educ, Key Lab High Confidence Software Technol PKU, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Filter pruning is one of the most effective ways to accelerate and compress convolutional neural networks (CNNs). In this work, we propose a global filter pruning algorithm called Gate Decorator, which transforms a vanilla CNN module by multiplying its output by channel-wise scaling factors (i.e., gates). Setting a scaling factor to zero is equivalent to removing the corresponding filter. We use a Taylor expansion to estimate the change in the loss function caused by setting each scaling factor to zero, and use this estimate for the global filter importance ranking. We then prune the network by removing the unimportant filters. After pruning, we merge all the scaling factors back into their original modules, so no special operations or structures are introduced. Moreover, we propose an iterative pruning framework called Tick-Tock to improve pruning accuracy. Extensive experiments demonstrate the effectiveness of our approaches. For example, we achieve the state-of-the-art pruning ratio on ResNet-56 by reducing FLOPs by 70% without noticeable loss in accuracy. For ResNet-50 on ImageNet, our pruned model with 40% FLOPs reduction outperforms the baseline model by 0.31% in top-1 accuracy. Various datasets are used, including CIFAR-10, CIFAR-100, CUB-200, ImageNet ILSVRC-12 and PASCAL VOC 2011.
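The core idea in the abstract, that zeroing a channel gate c changes the loss by approximately the first-order Taylor term |c * dL/dc|, and that these scores are ranked globally across layers, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; all function names and the toy values are hypothetical.

```python
import numpy as np

def taylor_importance(gates, grads):
    """First-order Taylor estimate of the |loss change| if each gate is zeroed.

    Setting gate c -> 0 gives L(0) - L(c) ~= -c * dL/dc, so the importance
    of the filter behind gate c is scored as |c * dL/dc|.
    """
    return np.abs(gates * grads)

def global_prune_mask(layer_gates, layer_grads, n_prune):
    """Rank all filters across all layers and mask out the n_prune least important."""
    scores = [taylor_importance(g, d) for g, d in zip(layer_gates, layer_grads)]
    flat = np.concatenate(scores)                # pool scores across layers
    threshold = np.sort(flat)[n_prune - 1]       # n_prune smallest scores get pruned
    return [s > threshold for s in scores]       # True = keep filter

# Toy example: two conv layers with 4 and 3 filters (gates and their gradients).
gates = [np.array([1.0, 0.1, 0.8, 0.05]), np.array([0.9, 0.02, 0.7])]
grads = [np.array([0.5, 0.4, 0.3, 0.2]), np.array([0.6, 0.1, 0.2])]
masks = global_prune_mask(gates, grads, n_prune=3)
# The three filters with the smallest |gate * grad| products are removed,
# regardless of which layer they live in -- a global, not per-layer, ranking.
```

Because the ranking is global, layers compete for the pruning budget: a layer whose gates all carry large Taylor scores keeps more of its filters than one whose gates are cheap to zero.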
Pages: 12
Related Papers (50 in total)
  • [1] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
    He, Yang
    Kang, Guoliang
    Dong, Xuanyi
    Fu, Yanwei
    Yang, Yi
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2234 - 2240
  • [2] Incremental Filter Pruning via Random Walk for Accelerating Deep Convolutional Neural Networks
    Li, Qinghua
    Li, Cuiping
    Chen, Hong
    [J]. PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020, : 358 - 366
  • [3] Accelerating Convolutional Networks via Global & Dynamic Filter Pruning
    Lin, Shaohui
    Ji, Rongrong
    Li, Yuchao
    Wu, Yongjian
    Huang, Feiyue
    Zhang, Baochang
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2425 - 2432
  • [4] Soft Taylor Pruning for Accelerating Deep Convolutional Neural Networks
    Rong, Jintao
    Yu, Xiyi
    Zhang, Mingyang
    Ou, Linlin
    [J]. IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 5343 - 5349
  • [5] Tutor-Instructing Global Pruning for Accelerating Convolutional Neural Networks
    Yu, Fang
    Cui, Li
    [J]. ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2792 - 2799
  • [6] Filter Pruning via Probabilistic Model-based Optimization for Accelerating Deep Convolutional Neural Networks
    Li, Qinghua
    Li, Cuiping
    Chen, Hong
    [J]. WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2021, : 653 - 661
  • [7] FP-AGL: Filter Pruning With Adaptive Gradient Learning for Accelerating Deep Convolutional Neural Networks
    Kim, Nam Joon
    Kim, Hyun
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 5279 - 5290
  • [8] Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks
    He, Yang
    Dong, Xuanyi
    Kang, Guoliang
    Fu, Yanwei
    Yan, Chenggang
    Yang, Yi
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (08) : 3594 - 3604
  • [9] Complex hybrid weighted pruning method for accelerating convolutional neural networks
    Geng, Xu
    Gao, Jinxiong
    Zhang, Yonghui
    Xu, Dingtan
    [J]. SCIENTIFIC REPORTS, 2024, 14 (01)