Compression of Convolutional Neural Networks With Divergent Representation of Filters

Cited by: 1
Authors
Lei, Peng [1 ]
Liang, Jiawei [1 ]
Zheng, Tong [1 ]
Wang, Jun [1 ]
Affiliations
[1] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
Keywords
Convolutional neural networks; Kernel; Adaptation models; Training; Hardware; Taylor series; Learning systems; Convolutional neural networks (CNNs); divergent representation; filter-level pruning; sparsity constraints
DOI
10.1109/TNNLS.2022.3201846
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have achieved remarkable results on many tasks. However, most CNNs are difficult to deploy directly on embedded systems because of their huge memory and computing requirements. In this article, we propose a pruning framework, namely FiltDivNet, to accelerate and compress CNN models so that they can run on small or portable devices. The correlations among filters are taken into account and measured by goodness of fit. On this basis, a hybrid-cluster pruning strategy is designed with dynamic pruning ratios for the different clusters in a CNN model; it preserves the diversity of the model's filters by removing redundant ones cluster by cluster. In addition, a new loss function with adaptive sparsity constraints is introduced for retraining and fine-tuning within FiltDivNet. Finally, comparative experiments on classical CNN models demonstrate the framework's compression effectiveness and its adaptability to different CNN architectures.
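The abstract describes FiltDivNet only at a high level; the exact clustering and pruning rules are not given here. The following is a minimal NumPy sketch of the general idea it outlines, filter-level pruning driven by a goodness-of-fit similarity. The functions r2_similarity, cluster_filters, and prune_redundant, the 0.9 threshold, the greedy clustering rule, and the largest-L1-norm keep criterion are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch of cluster-based filter pruning, assuming a goodness-of-fit
# (R^2) similarity between filters. Threshold, clustering rule, and keep
# criterion are illustrative choices, not the paper's exact algorithm.
import numpy as np

def r2_similarity(f_a, f_b):
    """R^2 of a least-squares linear fit of filter f_b onto filter f_a."""
    x, y = f_a.ravel(), f_b.ravel()
    slope, intercept = np.polyfit(x, y, deg=1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / (ss_tot + 1e-12)

def cluster_filters(filters, threshold=0.9):
    """Greedy clustering: a filter joins the first cluster whose
    representative it fits with R^2 above the threshold."""
    clusters = []  # each cluster is a list of filter indices
    for i, f in enumerate(filters):
        for cluster in clusters:
            if r2_similarity(filters[cluster[0]], f) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def prune_redundant(filters, threshold=0.9):
    """Keep one representative per cluster (the largest L1 norm);
    drop the remaining filters as redundant."""
    keep = []
    for cluster in cluster_filters(filters, threshold):
        norms = [np.abs(filters[i]).sum() for i in cluster]
        keep.append(cluster[int(np.argmax(norms))])
    return sorted(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(3, 3, 3))  # one 3x3 filter, 3 input channels
    layer = np.stack([
        base,                          # original filter
        2.0 * base + 0.1,              # affine near-duplicate (R^2 ~ 1)
        rng.normal(size=(3, 3, 3)),    # unrelated filter
    ])
    print("kept filter indices:", prune_redundant(layer))  # e.g. [1, 2]
```

In a real pruning pipeline, the kept indices would be used to slice the convolution's weight tensor (and the next layer's input channels), followed by the retraining and fine-tuning stage the abstract mentions.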
Pages: 4125-4137
Number of pages: 13
Related Papers (50 total)
  • [1] Evaluating the Compression Efficiency of the Filters in Convolutional Neural Networks
    Osawa, Kazuki
    Yokota, Rio
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 459 - 466
  • [2] Convolutional Neural Networks with Recurrent Neural Filters
    Yang, Yi
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 912 - 917
  • [3] Symmetrical filters in convolutional neural networks
    Dzhezyan, Gregory
    Cecotti, Hubert
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (07) : 2027 - 2039
  • [4] Correlative Filters for Convolutional Neural Networks
    Chen, Peiqiu
    Wang, Hanli
    Wu, Jun
    2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, 2015, : 3042 - 3047
  • [5] Deep Convolutional Neural Networks Compression Method Based on Linear Representation of Kernels
    Chen, Ruobing
    Chen, Yefei
    Su, Jianbo
    ELEVENTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2018), 2019, 11041
  • [6] Compressing Convolutional Neural Networks via Factorized Convolutional Filters
    Li, Tuanhui
    Wu, Baoyuan
    Yang, Yujiu
    Fan, Yanbo
    Zhang, Yong
    Liu, Wei
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 3972 - 3981
  • [7] Convolutional Neural Networks with analytically determined Filters
    Kissel, Matthias
    Diepold, Klaus
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022.
  • [8] Learning to Prune Filters in Convolutional Neural Networks
    Huang, Qiangui
    Zhou, Kevin
    You, Suya
    Neumann, Ulrich
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 709 - 718
  • [9] Deterministic Binary Filters for Convolutional Neural Networks
    Tseng, Vincent W-S
    Bhattacharya, Sourav
    Fernandez-Marques, Javier
    Alizadeh, Milad
    Tong, Catherine
    Lane, Nicholas D.
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2739 - 2747