Pruning by explaining: A novel criterion for deep neural network pruning

Cited by: 111
Authors
Yeom, Seul-Ki [1 ,9 ]
Seegerer, Philipp [1 ,8 ]
Lapuschkin, Sebastian [3 ]
Binder, Alexander [4 ,5 ]
Wiedemann, Simon [3 ]
Mueller, Klaus-Robert [1 ,2 ,6 ,7 ]
Samek, Wojciech [2 ,3 ]
Affiliations
[1] Tech Univ Berlin, Machine Learning Grp, D-10587 Berlin, Germany
[2] BIFOLD Berlin Inst Fdn Learning & Data, Berlin, Germany
[3] Fraunhofer Heinrich Hertz Inst, Dept Artificial Intelligence, D-10587 Berlin, Germany
[4] Singapore Univ Technol & Design, ISTD Pillar, Singapore 487372, Singapore
[5] Univ Oslo, Dept Informat, N-0373 Oslo, Norway
[6] Korea Univ, Dept Artificial Intelligence, Seoul 136713, South Korea
[7] Max Planck Inst Informat, D-66123 Saarbrucken, Germany
[8] Aignost GmbH, D-10557 Berlin, Germany
[9] Nota AI GmbH, D-10117 Berlin, Germany
Keywords
Pruning; Layer-wise relevance propagation (LRP); Convolutional neural network (CNN); Interpretation of models; COMPRESSION;
DOI
10.1016/j.patcog.2021.107899
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which data for the target task is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost on the order of a gradient computation and is comparatively simple to apply, without the need for tuning pruning hyperparameters. (c) 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license ( http://creativecommons.org/licenses/by/4.0/ )
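The criterion described above ranks units by relevance scores propagated through the network and removes the least relevant ones. A minimal sketch of that idea, assuming a single linear layer scored with the basic LRP epsilon-rule (the paper applies layer-wise relevance propagation through full CNNs, and the function names here are illustrative, not the authors' implementation):

```python
def lrp_relevance(weights, activations, out_relevance, eps=1e-9):
    """Distribute output relevance onto input units via the LRP epsilon-rule.

    weights[j][i] connects input unit i to output unit j; the relevance each
    input receives is proportional to its contribution a_i * w_ji to z_j.
    """
    n_out, n_in = len(weights), len(weights[0])
    rel = [0.0] * n_in
    for j in range(n_out):
        z = sum(weights[j][i] * activations[i] for i in range(n_in))
        denom = z + (eps if z >= 0 else -eps)  # stabilized denominator
        for i in range(n_in):
            rel[i] += activations[i] * weights[j][i] / denom * out_relevance[j]
    return rel

def prune_least_relevant(relevance, ratio):
    """Return indices of the lowest-relevance fraction of units to remove."""
    k = int(len(relevance) * ratio)
    order = sorted(range(len(relevance)), key=lambda i: relevance[i])
    return set(order[:k])

# Toy example: 2 output units, 4 input units, unit activations.
W = [[1.0, 0.0, 2.0, 0.1],
     [0.5, 0.0, 1.0, 0.1]]
a = [1.0, 1.0, 1.0, 1.0]
R_out = [1.0, 1.0]

rel = lrp_relevance(W, a, R_out)
pruned = prune_least_relevant(rel, 0.5)  # drop the 2 least relevant inputs
```

Note that the epsilon-rule approximately conserves total relevance (the scores on the inputs sum to the relevance injected at the output), which is what makes the ranking comparable across units; in the iterative setting of the paper, scoring and pruning are repeated, optionally with fine-tuning in between.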
Pages: 14
Related papers (50 entries)
  • [31] DEEP NETWORK PRUNING FOR OBJECT DETECTION
    Ghosh, Sanjukta
    Srinivasa, Shashi K. K.
    Amon, Peter
    Hutter, Andreas
    Kaup, Andre
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3915 - 3919
  • [32] Neural network pruning for function approximation
    Setiono, R
    Gaweda, A
    [J]. IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL VI, 2000, : 443 - 448
  • [33] Importance Estimation for Neural Network Pruning
    Molchanov, Pavlo
    Mallya, Arun
    Tyree, Stephen
    Frosio, Iuri
    Kautz, Jan
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11256 - 11264
  • [34] Neural network pruning and hardware acceleration
    Jeong, Taehee
    Ghasemi, Ehsam
    Tuyls, Jorn
    Delaye, Elliott
    Sirasao, Ashish
    [J]. 2020 IEEE/ACM 13TH INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING (UCC 2020), 2020, : 440 - 445
  • [35] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
    Luo, Jian-Hao
    Wu, Jianxin
    Lin, Weiyao
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 5068 - 5076
  • [36] Group Pruning with Group Sparse Regularization for Deep Neural Network Compression
    Wu, Chenglu
    Pang, Wei
    Liu, Hao
    Lu, Shengli
    [J]. 2019 IEEE 4TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP 2019), 2019, : 325 - 329
  • [37] Deep Neural Network Channel Pruning Compression Method for Filter Elasticity
    Li, Ruiquan
    Zhu, Lu
    Liu, Yuanyuan
    [J]. Computer Engineering and Applications, 2024, 60 (06) : 163 - 171
  • [38] A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations
    Cheng, Hongrong
    Zhang, Miao
    Shi, Javen Qinfeng
    [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46 (12) : 10558 - 10578
  • [39] Fused Pruning based Robust Deep Neural Network Watermark Embedding
    Li, Tengfei
    Wang, Shuo
    Jing, Huiyun
    Lian, Zhichao
    Meng, Shunmei
    Li, Qianmu
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2475 - 2481
  • [40] Deep Neural Network Compression by In-Parallel Pruning-Quantization
    Tung, Frederick
    Mori, Greg
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (03) : 568 - 579