SeNPIS: Sequential Network Pruning by class-wise Importance Score

Cited by: 7
Authors
Pachon, Cesar G. [1]
Ballesteros, Dora M. [1]
Renza, Diego [1]
Affiliations
[1] Univ Mil Nueva Granada, Carrera 11 101-80, Bogota 110111, Colombia
Keywords
Deep learning; Model compression; Pruning algorithm; Importance score; Convolutional neural network;
DOI
10.1016/j.asoc.2022.109558
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In the last decade, pattern recognition and decision making from images have mainly focused on the development of deep learning architectures, with different types of networks such as sequential, residual and parallel. Although depth and size vary between models, they all have in common that they can contain multiple filters or neurons that are not important for the purpose of prediction and that negatively impact the size of the model and its inference time. Therefore, it is advantageous to use pruning methods that, while largely maintaining the initial performance of the classifier, significantly reduce its size and FLOPs. In parameter reduction, the decision rule is generally based on mathematical criteria, e.g. the magnitude of the weights, but not on the actual impact of the filter or neuron on the classifier's performance for each of the classes. Therefore, we propose SeNPIS, a method that involves both filter and neuron selection based on a class-wise importance score, and network resizing to increase the reduction in parameters and FLOPs in sequential CNNs. Several tests were performed to compare SeNPIS with other representative state-of-the-art methods on the CIFAR-10 and Scene-15 datasets. It was found that, for similar values of accuracy, and in some cases even with a slight increase in accuracy, SeNPIS significantly reduces the number of parameters by up to an additional 23.5% (i.e., a 51.05% reduction with SeNPIS versus a 27.53% reduction with Gradient) and FLOPs by up to an additional 26.6% (i.e., a 74.82% reduction with SeNPIS versus a 48.16% reduction with Weight) compared to the Weight, Taylor, Gradient and LRP methods. (c) 2022 Elsevier B.V. All rights reserved.
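To make the idea of a class-wise importance score more concrete, the sketch below shows one possible way to rank the filters of a single convolutional layer by how strongly they respond, on average, for each class, and then to mark the lowest-scoring filters for removal. This is a minimal illustrative sketch in PyTorch under assumed names (classwise_filter_scores, filters_to_prune) and an assumed activation-based scoring rule; it is not the authors' SeNPIS implementation, whose exact importance criterion and network-resizing step are described in the paper itself.

    # Illustrative sketch only (not the SeNPIS implementation): score the filters
    # of one Conv2d layer by their mean absolute activation computed separately
    # per class, aggregate across classes, and select the lowest-scoring filters.
    import torch

    def classwise_filter_scores(model, layer, loader, num_classes, device="cpu"):
        """Return one importance score per filter in `layer`, built from
        per-class mean activations (assumed scoring rule, for illustration)."""
        acts = []
        hook = layer.register_forward_hook(
            lambda module, inputs, output: acts.append(output.detach())
        )
        score_sum = torch.zeros(num_classes, layer.out_channels)
        count = torch.zeros(num_classes)

        model.eval()
        with torch.no_grad():
            for x, y in loader:
                acts.clear()
                model(x.to(device))
                # (batch, filters): spatial mean of absolute activations
                a = acts[0].abs().mean(dim=(2, 3)).cpu()
                for c in range(num_classes):
                    mask = (y == c)
                    if mask.any():
                        score_sum[c] += a[mask].sum(dim=0)
                        count[c] += mask.sum()
        hook.remove()
        per_class = score_sum / count.clamp(min=1).unsqueeze(1)  # (classes, filters)
        return per_class.mean(dim=0)  # aggregate across classes (simple mean here)

    def filters_to_prune(scores, prune_ratio=0.3):
        """Indices of the lowest-scoring filters for a given pruning ratio."""
        k = int(prune_ratio * scores.numel())
        return torch.argsort(scores)[:k].tolist()

In a complete pipeline the selected filters would then be physically removed, the downstream layers resized accordingly, and the compact network fine-tuned; this resizing step is what the abstract refers to as increasing the reduction in parameters and FLOPs.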
Pages: 13
Related Papers
50 records in total
  • [31] GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics
    Cai, Fengyu
    Zhao, Xinran
    Zhang, Hongming
    Gurevych, Iryna
    Koeppl, Heinz
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 5571 - 5597
  • [32] Neural Networks Classify through the Class-Wise Means of Their Representations
    Seddik, Mohamed El Amine
    Tamaazousti, Mohamed
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8204 - 8211
  • [33] Extensions of LDA by PCA mixture model and class-wise features
    Kim, HC
    Kim, D
    Bang, SY
    8TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING, VOLS 1-3, PROCEEDING, 2001, : 387 - 392
  • [34] Unsupervised Domain Adaptation Using Robust Class-Wise Matching
    Zhang, Lei
    Wang, Peng
    Wei, Wei
    Lu, Hao
    Shen, Chunhua
    van den Hengel, Anton
    Zhang, Yanning
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (05) : 1339 - 1349
  • [35] Alleviating Class-Wise Gradient Imbalance for Pulmonary Airway Segmentation
    Zheng, Hao
    Qin, Yulei
    Gu, Yun
    Xie, Fangfang
    Yang, Jie
    Sun, Jiayuan
    Yang, Guang-Zhong
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (09) : 2452 - 2462
  • [36] Extensions of LDA by PCA mixture model and class-wise features
    Kim, HC
    Kim, D
    Bang, SY
    PATTERN RECOGNITION, 2003, 36 (05) : 1095 - 1105
  • [37] Instance-wise or Class-wise? A Tale of Neighbor Shapley for Concept-based Explanation
    Li, Jiahui
    Kuang, Kun
    Li, Lin
    Chen, Long
    Zhang, Songyang
    Shao, Jian
    Xiao, Jun
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 3664 - 3672
  • [38] CLASS-WISE FM-NMS FOR KNOWLEDGE DISTILLATION OF OBJECT DETECTION
    Liu, Lyuzhuang
    Hirakawa, Tsubasa
    Yamashita, Takayoshi
    Fujiyoshi, Hironobu
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1641 - 1645
  • [39] Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy
    Benz, Philipp
    Zhang, Chaoning
    Karjauv, Adil
    Kweon, In So
    NEURIPS 2020 WORKSHOP ON PRE-REGISTRATION IN MACHINE LEARNING, VOL 148, 2020, 148 : 325 - 342
  • [40] Bi-directional class-wise adversaries for unsupervised domain adaptation
    Yang, Guanglei
    Ding, Mingli
    Zhang, Yongqiang
    APPLIED INTELLIGENCE, 2022, 52 (04) : 3623 - 3639