Pruning CNN's with Linear Filter Ensembles

Cited by: 3
Authors
Sandor, Csanad [1 ,2 ]
Pavel, Szabolcs [1 ,2 ]
Csato, Lehel [1 ]
Affiliations
[1] Babes Bolyai Univ, Cluj Napoca, Romania
[2] Robert Bosch SRL, Cluj Napoca, Romania
Source
ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2020 / Vol. 325
DOI
10.3233/FAIA200249
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Despite the promising results of convolutional neural networks (CNNs), their application on devices with limited resources is still a big challenge; this is mainly due to the huge memory and computation requirements of CNNs. To counter the limitation imposed by the network size, we use pruning to reduce the network size and, implicitly, the number of floating point operations (FLOPs). Contrary to the filter-norm method used in "conventional" network pruning, which is based on the assumption that a smaller norm implies "less importance" of the associated component, we develop a novel filter importance measure that is based on the change in the empirical loss caused by the presence or removal of a component from the network architecture. Since there are too many individual possibilities for filter configuration, we repeatedly sample from these architectural components and measure the system performance in the respective state of components being active or disabled. The result is a collection of filter ensembles - filter masks - and associated performance values. We rank the filters based on a linear and additive model and remove the least important ones such that the drop in network accuracy is minimal. We evaluate our method on a fully connected network, as well as on the ResNet architecture trained on the CIFAR-10 dataset. Using our pruning method, we managed to remove 60% of the parameters and 64% of the FLOPs from the ResNet with an accuracy drop of less than 0.6%.
Pages: 1435-1442 (8 pages)
Related Papers (50 results)
  • [1] LDP: A Large Diffuse Filter Pruning to Slim the CNN
    Wei, Wenyue
    Wang, Yizhen
    Li, Yun
    Xia, Yinfeng
    Yin, Baoqun
    PROCEEDINGS OF 2022 THE 6TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, ICMLSC 2022, 2022, : 26 - 32
  • [2] Approximated Oracle Filter Pruning for Destructive CNN Width Optimization
    Ding, Xiaohan
    Ding, Guiguang
    Guo, Yuchen
    Han, Jungong
    Yan, Chenggang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [3] Mask-Soft Filter Pruning for Lightweight CNN Inference
    Kim, Nam Joon
    Kim, Hyun
    2020 17TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC 2020), 2020, : 316 - 317
  • [4] Knowledge transferred adaptive filter pruning for CNN compression and acceleration
    Lihua Guo
    Dawu Chen
    Kui Jia
    Science China Information Sciences, 2022, 65
  • [5] Adaptive CNN filter pruning using global importance metric
    Mondal, Milton
    Das, Bishshoy
    Roy, Sumantra Dutta
    Singh, Pushpendra
    Lall, Brejesh
    Joshi, Shiv Dutt
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2022, 222
  • [6] Knowledge transferred adaptive filter pruning for CNN compression and acceleration
    Lihua GUO
    Dawu CHEN
    Kui JIA
    Science China(Information Sciences), 2022, 65 (12) : 315 - 316
  • [7] Knowledge transferred adaptive filter pruning for CNN compression and acceleration
    Guo, Lihua
    Chen, Dawu
    Jia, Kui
    SCIENCE CHINA-INFORMATION SCIENCES, 2022, 65 (12)
  • [8] Analysis and Optimization of CNN-based Super Resolution with Filter Pruning
    Jeong, Jonghun
    Yang, Hoeseok
    2019 10TH INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC): ICT CONVERGENCE LEADING THE AUTONOMOUS FUTURE, 2019, : 221 - 223
  • [9] Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN
    Hao, Tianxiang
    Ding, Xiaohan
    Han, Jungong
    Guo, Yuchen
    Ding, Guiguang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16831 - 16844
  • [10] A Filter Pruning Method of CNN Models Based on Feature Maps Clustering
    Wu, Zhihong
    Li, Fuxiang
    Zhu, Yuan
    Lu, Ke
    Wu, Mingzhi
    Zhang, Changze
    APPLIED SCIENCES-BASEL, 2022, 12 (09):