A Layer-Wise Extreme Network Compression for Super Resolution

Cited by: 0
Authors
Hwang, Jiwon [1 ]
Uddin, A. F. M. Shahab [1 ]
Bae, Sung-Ho [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Quantization (signal); Image coding; Image reconstruction; Adaptation models; Degradation; Convolution; Task analysis; Single image super resolution; model compression; layer-wise; quantization; pruning; joint learning; reinforcement learning;
DOI
10.1109/ACCESS.2021.3090404
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Deep neural networks (DNNs) for single image super-resolution (SISR) tend to have large model sizes and high computational complexity to achieve promising restoration performance. Unlike image classification, model compression for SISR has rarely been studied. In this paper, we find that DNNs for image classification and for SISR often have different characteristics in terms of layer importance. That is, contrary to DNNs for image classification, the performance of SISR networks hardly decreases even if a few layers are eliminated during inference. This is because they typically consist of many hierarchical and complex residual connections. Based on this key observation, we propose a layer-wise extreme network compression method for SISR. The proposed method consists of: i) a reinforcement learning-based joint framework for layer-wise quantization and pruning, both of which are effectively incorporated into the search space; and ii) a progressive preserve-ratio scheduling that reflects the importance of each layer more effectively, yielding much higher compression efficiency. Our comprehensive experiments show that the proposed method can be effectively applied to existing SISR networks, reducing model size by up to 97% (i.e., 1 bit per weight on average) with marginal performance degradation compared to the corresponding full-precision models.
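To make the reported numbers concrete, below is a minimal NumPy sketch of layer-wise joint quantization and pruning with a progressive preserve-ratio schedule. All names (`quantize_layer`, `prune_layer`, `progressive_schedule`), the uniform/binary quantizers, the magnitude-pruning criterion, and the linear schedule are illustrative assumptions for this sketch; in the actual method, the RL agent would select the per-layer bit-widths and preserve ratios rather than the hand-fixed values used here.

```python
import numpy as np

def quantize_layer(w, bits):
    """Layer-wise uniform quantization (illustrative scheme, not the paper's exact one)."""
    if bits >= 32:
        return w
    if bits == 1:
        # Binary weights: sign times the mean magnitude (BWN-style).
        return np.sign(w) * np.abs(w).mean()
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.clip(np.round(w / scale), -levels, levels) * scale

def prune_layer(w, preserve_ratio):
    """Magnitude pruning: keep the largest `preserve_ratio` fraction of weights."""
    k = max(1, int(round(preserve_ratio * w.size)))
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def progressive_schedule(num_layers, start=1.0, end=0.1):
    """Hypothetical progressive preserve-ratio schedule: prune later layers harder."""
    return np.linspace(start, end, num_layers)

# Toy "network": one random conv weight tensor per layer.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64, 3, 3)) for _ in range(8)]
bit_actions = [2, 1, 1, 1, 1, 1, 1, 2]  # per-layer bit-widths (one sample action)
ratios = progressive_schedule(len(layers))

compressed = [prune_layer(quantize_layer(w, b), r)
              for w, b, r in zip(layers, bit_actions, ratios)]

# Average bits per weight, counting pruned weights as free (index overhead ignored).
total = sum(w.size for w in layers)
avg_bits = sum(b * r * w.size for w, b, r in zip(layers, bit_actions, ratios)) / total
print(f"~{avg_bits:.2f} bits/weight -> {100 * (1 - avg_bits / 32):.1f}% smaller than FP32")
```

With these toy settings the average cost lands near 1 bit per weight, which is roughly where the abstract's ~97% reduction comes from: 1 bit against a 32-bit full-precision baseline is about 3% of the original size.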
Pages: 93998-94009
Number of pages: 12
Related papers
50 items in total
  • [1] Layer-Wise Network Compression Using Gaussian Mixture Model
    Lee, Eunho
    Hwang, Youngbae
    [J]. ELECTRONICS, 2021, 10 (01) : 1 - 16
  • [2] Sensitivity-Oriented Layer-Wise Acceleration and Compression for Convolutional Neural Network
    Zhou, Wei
    Niu, Yue
    Zhang, Guanwen
    [J]. IEEE ACCESS, 2019, 7 : 38264 - 38272
  • [3] eXtreme Federated Learning (XFL): a layer-wise approach
    El Mokadem, Rachid
    Ben Maissa, Yann
    El Akkaoui, Zineb
    [J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (05): : 5741 - 5754
  • [4] Layer-Wise Data-Free CNN Compression
    Horton, Maxwell
    Jin, Yanzi
    Farhadi, Ali
    Rastegari, Mohammad
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2019 - 2026
  • [5] FLEXIBLE NETWORK BINARIZATION WITH LAYER-WISE PRIORITY
    Wang, He
    Xu, Yi
    Ni, Bingbing
    Zhuang, Lixue
    Xu, Hongteng
    [J]. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 2346 - 2350
  • [6] LAD: Layer-Wise Adaptive Distillation for BERT Model Compression
    Lin, Ying-Jia
    Chen, Kuan-Yu
    Kao, Hung-Yu
    [J]. SENSORS, 2023, 23 (03)
  • [7] A Layer-Wise Ensemble Technique for Binary Neural Network
    Xi, Jiazhen
    Yamauchi, Hiroyuki
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2021, 35 (08)
  • [8] Layer-Wise Invertibility for Extreme Memory Cost Reduction of CNN Training
    Hascoet, Tristan
    Febvre, Quentin
    Zhuang, Weihao
    Ariki, Yasuo
    Takiguchi, Tetsuya
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 2049 - 2052
  • [9] Network with Sub-networks: Layer-wise Detachable Neural Network
    Fuengfusin, Ninnart
    Tamukoh, Hakaru
    [J]. JOURNAL OF ROBOTICS NETWORKING AND ARTIFICIAL LIFE, 2021, 7 (04): : 240 - 244
  • [10] The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures
    Li, Yawei
    Li, Wen
    Danelljan, Martin
    Zhang, Kai
    Gu, Shuhang
    Van Gool, Luc
    Timofte, Radu
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2144 - 2153