Filter pruning by image channel reduction in pre-trained convolutional neural networks

Cited by: 0
Authors
Gi Su Chung
Chee Sun Won
Institution
[1] Dongguk University-Seoul, Department of Electronic and Electrical Engineering
Source
Multimedia Tools and Applications, 2021, 80(20)
Keywords
Network pruning; CNN filter compression; Facial emotion classification; Image channel reduction
DOI
Not available
Abstract
There are domain-specific image classification problems, such as facial emotion and house-number classification, where the color information in the images may not be crucial for recognition. This motivates us to convert RGB images to gray-scale images with a single Y channel before feeding them into a pre-trained convolutional neural network (CNN). Since the existing CNN models are pre-trained on three-channel color images, one can expect that some trained filters are more sensitive to color than to brightness. Therefore, by adopting single-channel gray-scale images as inputs, we can prune out some of the convolutional filters in the first layer of the pre-trained CNN. This first-layer pruning greatly facilitates filter compression in the subsequent convolutional layers. The pre-trained CNN with the compressed filters is then fine-tuned on single-channel images for a domain-specific dataset. Experimental results on the facial emotion and Street View House Numbers (SVHN) datasets show that the proposed method achieves a significant compression of the pre-trained CNN filters. For example, compared with the VGG-16 model fine-tuned on color images, we save 10.538 GFLOPs of computation while keeping the classification accuracy around 84% on the facial emotion RAF-DB dataset.
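The sketch below illustrates the general idea described in the abstract: ranking the first-layer filters of a pre-trained VGG-16 by how color-sensitive they are, keeping the more brightness-oriented ones, collapsing their RGB kernels to a single Y channel, and pruning the corresponding input channels of the next layer. It is a minimal illustration assuming PyTorch and torchvision; the sensitivity score (variance across the RGB kernel slices), the 75% keep ratio, and the luminance weights are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: adapt a pre-trained VGG-16 to single-channel (Y) input by
# pruning color-sensitive first-layer filters. Requires torchvision >= 0.13.
# The selection criterion and keep ratio below are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")   # pre-trained on RGB ImageNet
conv1 = model.features[0]                       # Conv2d(3, 64, kernel_size=3, padding=1)
W = conv1.weight.data                           # shape: [64, 3, 3, 3]

# Score each filter by how much its three RGB kernel slices disagree:
# near-identical slices suggest a brightness-oriented filter worth keeping.
color_sensitivity = W.var(dim=1).mean(dim=(1, 2))                  # [64]
keep = torch.argsort(color_sensitivity)[: int(0.75 * W.size(0))]   # keep e.g. 75%

# Collapse the kept filters' RGB kernels to one channel with luminance weights.
y_weights = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
W_gray = (W[keep] * y_weights).sum(dim=1, keepdim=True)            # [K, 1, 3, 3]

new_conv1 = nn.Conv2d(1, keep.numel(), kernel_size=3, padding=1)
new_conv1.weight.data.copy_(W_gray)
new_conv1.bias.data.copy_(conv1.bias.data[keep])
model.features[0] = new_conv1

# The next conv layer drops the input channels that fed from the removed filters.
conv2 = model.features[2]                       # Conv2d(64, 64, kernel_size=3, padding=1)
new_conv2 = nn.Conv2d(keep.numel(), conv2.out_channels, kernel_size=3, padding=1)
new_conv2.weight.data.copy_(conv2.weight.data[:, keep])
new_conv2.bias.data.copy_(conv2.bias.data)
model.features[2] = new_conv2

# The compressed model would then be fine-tuned on single-channel images
# from the target dataset (e.g. RAF-DB or SVHN).
```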
Pages: 30817-30826
Number of pages: 9
Related papers (50 in total)
  • [1] Filter pruning by image channel reduction in pre-trained convolutional neural networks
    Chung, Gi Su
    Won, Chee Sun
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (20) : 30817 - 30826
  • [2] The Impact of Padding on Image Classification by Using Pre-trained Convolutional Neural Networks
    Tang, Hongxiang
    Ortis, Alessandro
    Battiato, Sebastiano
    [J]. IMAGE ANALYSIS AND PROCESSING - ICIAP 2019, PT II, 2019, 11752 : 337 - 344
  • [3] CONVOLUTIONAL NEURAL NETWORKS FOR OMNIDIRECTIONAL IMAGE QUALITY ASSESSMENT: PRE-TRAINED OR RE-TRAINED?
    Sendjasni, Abderrezzaq
    Larabi, Mohamed-Chaker
    Cheikh, Faouzi Alaya
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3413 - 3417
  • [4] A Filter for SAR Image Despeckling Using Pre-Trained Convolutional Neural Network Model
    Pan, Ting
    Peng, Dong
    Yang, Wen
    Li, Heng-Chao
    [J]. REMOTE SENSING, 2019, 11 (20)
  • [5] Pre-trained Convolutional Neural Networks for the Lung Sounds Classification
    Vaityshyn, Valentyn
    Porieva, Hanna
    Makarenkova, Anastasiia
    [J]. 2019 IEEE 39TH INTERNATIONAL CONFERENCE ON ELECTRONICS AND NANOTECHNOLOGY (ELNANO), 2019, : 522 - 525
  • [6] Convolutional Neural Networks for Histopathology Image Classification: Training vs. Using Pre-Trained Networks
    Kieffer, Brady
    Babaie, Morteza
    Kalra, Shivam
    Tizhoosh, H. R.
    [J]. PROCEEDINGS OF THE 2017 SEVENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA 2017), 2017,
  • [7] Medical Image Classification using Pre-trained Convolutional Neural Networks and Support Vector Machine
    Ahmed, Ali
    [J]. INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (06): : 1 - 6
  • [8] Pre-Trained Convolutional Neural Network for Classification of Tanning Leather Image
    Winiarti, Sri
    Prahara, Adhi
    Murinto
    Ismi, Dewi Pramudi
    [J]. INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2018, 9 (01) : 212 - 217
  • [9] Dynamic Convolutional Neural Networks as Efficient Pre-Trained Audio Models
    Schmid, Florian
    Koutini, Khaled
    Widmer, Gerhard
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 2227 - 2241
  • [10] Performance Improvement Of Pre-trained Convolutional Neural Networks For Action Recognition
    Ozcan, Tayyip
    Basturk, Alper
    [J]. COMPUTER JOURNAL, 2021, 64 (11): : 1715 - 1730