A "Network Pruning Network" Approach to Deep Model Compression

Cited by: 0
Authors
Verma, Vinay Kumar [1 ]
Singh, Pravendra [1 ]
Namboodiri, Vinay P. [1 ]
Rai, Piyush [1 ]
Affiliations
[1] IIT Kanpur, Dept Comp Sci & Engn, Kanpur, Uttar Pradesh, India
Keywords
DOI: Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We present a filter pruning approach for deep model compression, using a multitask network. Our approach is based on learning a pruner network to prune a pre-trained target network. The pruner is essentially a multitask deep neural network with binary outputs that identify the filters in each layer of the original network that do not contribute significantly to the model and can therefore be pruned. The pruner network has the same architecture as the original network, except that its last layer is a multitask/multi-output layer of binary-valued outputs (one per filter) indicating which filters are to be pruned. The pruner's goal is to minimize the number of filters retained from the original network by assigning zero weights to the corresponding output feature maps. In contrast to most existing methods, which rely on iterative pruning, our approach prunes the original network in one go and does not require specifying the degree of pruning for each layer; it learns this instead. The compressed model produced by our approach is generic and does not need any special hardware/software support. Moreover, the proposed approach can be combined with other methods such as knowledge distillation, quantization, and connection pruning to further increase the degree of compression. We show the efficacy of our proposed approach on classification and object detection tasks.
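To make the mechanism described in the abstract concrete, below is a minimal sketch of the pruner-network idea, written in PyTorch as an assumption (it is not the authors' code): a pruner that mirrors the target backbone emits one approximately binary gate per filter, the gates zero out the corresponding feature maps of the pre-trained target, and a sparsity penalty on the gates drives filters toward being pruned. The class names (TargetNet, PrunerNet), the sigmoid gating, and the loss weighting are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class TargetNet(nn.Module):
    """Small stand-in for the pre-trained target network to be pruned."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x, gates=None):
        x = torch.relu(self.conv1(x))
        if gates is not None:  # zero out feature maps of filters flagged for pruning
            x = x * gates["conv1"].view(1, -1, 1, 1)
        x = torch.relu(self.conv2(x))
        if gates is not None:
            x = x * gates["conv2"].view(1, -1, 1, 1)
        return self.fc(self.pool(x).flatten(1))

class PrunerNet(nn.Module):
    """Mirrors the target backbone; a multitask head outputs one
    (approximately binary) gate per filter of the target network."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head1 = nn.Linear(32, 16)  # one logit per conv1 filter
        self.head2 = nn.Linear(32, 32)  # one logit per conv2 filter

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        z = self.pool(x).flatten(1)
        # sigmoid keeps gates in [0, 1]; the sparsity penalty below pushes them toward 0
        return {"conv1": torch.sigmoid(self.head1(z)).mean(0),
                "conv2": torch.sigmoid(self.head2(z)).mean(0)}

target, pruner = TargetNet(), PrunerNet()
optimizer = torch.optim.Adam(pruner.parameters(), lr=1e-3)  # only the pruner is trained
images = torch.randn(8, 3, 32, 32)                          # dummy batch
labels = torch.randint(0, 10, (8,))

gates = pruner(images)
logits = target(images, gates)
task_loss = nn.functional.cross_entropy(logits, labels)    # keep the gated target accurate
sparsity_loss = sum(g.sum() for g in gates.values())        # encourage gates (filters) to drop
loss = task_loss + 1e-3 * sparsity_loss                     # 1e-3 is an illustrative trade-off weight
optimizer.zero_grad()
loss.backward()
optimizer.step()

After training such a pruner, filters whose gates fall below a threshold would be physically removed to obtain the compressed model; that removal step and the exact binarization used by the authors are omitted from this sketch.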
Pages: 2998-3007
Page count: 10
Related Papers
50 records in total
  • [1] Discrimination-Aware Network Pruning for Deep Model Compression. Liu, Jing; Zhuang, Bohan; Zhuang, Zhuangwei; Guo, Yong; Huang, Junzhou; Zhu, Jinhui; Tan, Mingkui. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(08): 4035-4051.
  • [2] Automated Pruning for Deep Neural Network Compression. Manessi, Franco; Rozza, Alessandro; Bianco, Simone; Napoletano, Paolo; Schettini, Raimondo. 2018 24th International Conference on Pattern Recognition (ICPR), 2018: 657-664.
  • [3] Deep semantic image compression via cooperative network pruning. Luo, Sihui; Fang, Gongfan; Song, Mingli. Journal of Visual Communication and Image Representation, 2023, 95.
  • [4] Model Compression Based on Differentiable Network Channel Pruning. Zheng, Yu-Jie; Chen, Si-Bao; Ding, Chris H. Q.; Luo, Bin. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 10203-10212.
  • [5] Data-Free Network Pruning for Model Compression. Tang, Jialiang; Liu, Mingjin; Jiang, Ning; Cai, Huan; Yu, Wenxin; Zhou, Jinjia. 2021 IEEE International Symposium on Circuits and Systems (ISCAS), 2021.
  • [6] EEG Model Compression by Network Pruning for Emotion Recognition. Rao, Wenjie; Zhong, Sheng-hua. 2023 International Joint Conference on Neural Networks (IJCNN), 2023.
  • [7] A Discriminant Information Approach to Deep Neural Network Pruning. Hou, Zejiang; Kung, Sun-Yuan. 2020 25th International Conference on Pattern Recognition (ICPR), 2021: 9553-9560.
  • [8] Deep Neural Network Compression by In-Parallel Pruning-Quantization. Tung, Frederick; Mori, Greg. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(03): 568-579.
  • [9] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Luo, Jian-Hao; Wu, Jianxin; Lin, Weiyao. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 5068-5076.
  • [10] Group Pruning with Group Sparse Regularization for Deep Neural Network Compression. Wu, Chenglu; Pang, Wei; Liu, Hao; Lu, Shengli. 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP 2019), 2019: 325-329.