Pruning by explaining: A novel criterion for deep neural network pruning

Cited by: 111
Authors
Yeom, Seul-Ki [1 ,9 ]
Seegerer, Philipp [1 ,8 ]
Lapuschkin, Sebastian [3 ]
Binder, Alexander [4 ,5 ]
Wiedemann, Simon [3 ]
Mueller, Klaus-Robert [1 ,2 ,6 ,7 ]
Samek, Wojciech [2 ,3 ]
Affiliations
[1] Tech Univ Berlin, Machine Learning Grp, D-10587 Berlin, Germany
[2] BIFOLD Berlin Inst Fdn Learning & Data, Berlin, Germany
[3] Fraunhofer Heinrich Hertz Inst, Dept Artificial Intelligence, D-10587 Berlin, Germany
[4] Singapore Univ Technol & Design, ISTD Pillar, Singapore 487372, Singapore
[5] Univ Oslo, Dept Informat, N-0373 Oslo, Norway
[6] Korea Univ, Dept Artificial Intelligence, Seoul 136713, South Korea
[7] Max Planck Inst Informat, D-66123 Saarbrucken, Germany
[8] Aignost GmbH, D-10557 Berlin, Germany
[9] Nota AI GmbH, D-10117 Berlin, Germany
Keywords
Pruning; Layer-wise relevance propagation (LRP); Convolutional neural network (CNN); Interpretation of models; Compression
DOI
10.1016/j.patcog.2021.107899
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while, at the same time, aiming not to sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the target task is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost on the order of gradient computation and is comparatively simple to apply, without the need for tuning hyperparameters for pruning. (c) 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
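The pruning criterion summarized in the abstract — scoring units by their aggregated LRP relevance and removing the least relevant ones — can be sketched in a toy NumPy example. This is a minimal illustration under assumptions, not the authors' implementation: the two-layer network, the simplified LRP-epsilon rule, the use of absolute relevance averaged over a small batch, and all dimensions are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: input (4) -> hidden (8, ReLU) -> output (3)
W1 = rng.normal(size=(8, 4))   # hidden_dim x input_dim
W2 = rng.normal(size=(3, 8))   # output_dim x hidden_dim

def forward(x):
    h = np.maximum(0.0, W1 @ x)
    return h, W2 @ h

def lrp_hidden_relevance(x, eps=1e-6):
    """LRP-epsilon step: redistribute output relevance onto hidden units."""
    h, y = forward(x)
    R_out = y                                 # start relevance at the output
    z = W2 * h                                # contributions z[j, k] = w[j, k] * h[k]
    s = R_out / (z.sum(axis=1) + eps)         # normalize per output neuron
    return (z * s[:, None]).sum(axis=0)       # relevance per hidden unit, shape (8,)

# Aggregate |relevance| over a small "dataset", then prune the least relevant units.
X = rng.normal(size=(16, 4))
R = np.mean([np.abs(lrp_hidden_relevance(x)) for x in X], axis=0)

k = 2                                         # number of hidden units to prune
prune_idx = np.argsort(R)[:k]                 # lowest aggregated relevance
keep = np.setdiff1d(np.arange(W1.shape[0]), prune_idx)
W1_pruned, W2_pruned = W1[keep], W2[:, keep]
print(W1_pruned.shape, W2_pruned.shape)       # (6, 4) (3, 6)
```

In the paper this idea is applied per filter in convolutional layers and repeated iteratively, optionally interleaved with fine-tuning; the sketch only shows the core scoring-and-removal step on dense weights.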
Pages: 14
Related papers
50 records in total
  • [1] Pruning the deep neural network by similar function
    Liu, Hanqing
    Xin, Bo
    Mu, Senlin
    Zhu, Zhangqing
    2018 INTERNATIONAL SYMPOSIUM ON POWER ELECTRONICS AND CONTROL ENGINEERING (ISPECE 2018), 2019, 1187
  • [2] Automated Pruning for Deep Neural Network Compression
    Manessi, Franco
    Rozza, Alessandro
    Bianco, Simone
    Napoletano, Paolo
    Schettini, Raimondo
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 657 - 664
  • [3] Overview of Deep Convolutional Neural Network Pruning
    Li, Guang
    Liu, Fang
    Xia, Yuping
    2020 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING AND ARTIFICIAL INTELLIGENCE, 2020, 11584
  • [4] Pruning by Training: A Novel Deep Neural Network Compression Framework for Image Processing
    Tian, Guanzhong
    Chen, Jun
    Zeng, Xianfang
    Liu, Yong
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 344 - 348
  • [5] A Discriminant Information Approach to Deep Neural Network Pruning
    Hou, Zejiang
    Kung, Sun-Yuan
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9553 - 9560
  • [6] Pruning and quantization for deep neural network acceleration: A survey
    Liang, Tailin
    Glossner, John
    Wang, Lei
    Shi, Shaobo
    Zhang, Xiaotong
    NEUROCOMPUTING, 2021, 461 : 370 - 403
  • [7] Deep Neural Network Pruning Using Persistent Homology
    Watanabe, Satoru
    Yamana, Hayato
    2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 153 - 156
  • [8] ScoringNet: A Neural Network Based Pruning Criteria for Structured Pruning
    Wang, S.
    Zhang, Z.
    Scientific Programming, 2023, 2023
  • [9] Differentiable channel pruning guided via attention mechanism: a novel neural network pruning approach
    Cheng, Hanjing
    Wang, Zidong
    Ma, Lifeng
    Wei, Zhihui
    Alsaadi, Fawaz E.
    Liu, Xiaohui
    Complex & Intelligent Systems, 2023, 9 : 5611 - 5624
  • [10] Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
    Good, Aidan
    Lin, Jiaqi
    Yu, Xin
    Sieg, Hannah
    Ferguson, Mikey
    Zhe, Shandian
    Wieczorek, Jerzy
    Serra, Thiago
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022