Dynamic image super-resolution via progressive contrastive self-distillation

Cited by: 0
Authors
Zhang, Zhizhong [1 ,2 ]
Xie, Yuan [1 ,2 ]
Zhang, Chong [2 ]
Wang, Yanbo [2 ]
Qu, Yanyun [3 ]
Lin, Shaohui [2 ]
Ma, Lizhuang [2 ]
Tian, Qi [4 ]
Affiliations
[1] Ningde Normal Univ, Ningde, Peoples R China
[2] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200000, Peoples R China
[3] Xiamen Univ, Dept Comp Sci, Xiamen 361005, Peoples R China
[4] Huawei Noahs Ark Lab, Peoples R China
Funding
Natural Science Foundation of Shanghai;
Keywords
Single Image Super-Resolution; Model compression; Model acceleration; Dynamic neural networks
DOI
10.1016/j.patcog.2024.110502
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) are highly successful at image super-resolution (SR). However, they often require sophisticated architectures with high memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic contrastive self-distillation (Dynamic-CSD) framework that simultaneously compresses and accelerates various off-the-shelf SR models, and we explore using the trained model for dynamic inference. In particular, to build a compact student network, a channel-splitting super-resolution network (CSSR-Net) is first constructed from a target teacher network. Then, we propose a novel contrastive loss that improves the quality of SR images via explicit knowledge transfer. Furthermore, progressive CSD (Pro-CSD) is developed to extend the two-branch CSSR-Net to multiple branches, yielding a model that is switchable at runtime. Finally, a difficulty-aware branch selection strategy for dynamic inference is given. Extensive experiments demonstrate that the proposed Dynamic-CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN.
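To make the abstract's mechanics concrete, here is a minimal PyTorch sketch of the two core training-time ideas: a channel-splitting layer whose narrow student branch reuses a slice of the teacher's own weights, and a contrastive distillation loss that pulls the student output toward the teacher output while pushing it away from the bicubic-upsampled input. Everything here is illustrative: the names `SlimmableConv2d` and `contrastive_distill_loss` are hypothetical, and pixel-space L1 stands in for whatever feature space the paper's loss actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    """Hypothetical channel-splitting layer: the student branch runs on the
    first `width`-fraction of the teacher's filters, so both branches share
    one set of weights (the CSSR-Net idea, as we understand it)."""
    def forward(self, x, width=1.0):
        out_c = max(1, int(round(self.out_channels * width)))
        in_c = x.size(1)  # the input may already be a narrowed feature map
        weight = self.weight[:out_c, :in_c]
        bias = self.bias[:out_c] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding,
                        self.dilation, self.groups)

def contrastive_distill_loss(student_sr, teacher_sr, lr_up, eps=1e-8):
    """Positive pair: student vs. teacher output (pull together).
    Negative pair: student vs. bicubic-upsampled LR input (push apart).
    Pixel-space L1 keeps the sketch self-contained; the paper may instead
    measure these distances in a pretrained feature space."""
    d_pos = F.l1_loss(student_sr, teacher_sr)
    d_neg = F.l1_loss(student_sr, lr_up)
    return d_pos / (d_neg + eps)

# Toy usage: one shared layer acting as teacher (width=1.0) and student (0.5).
conv = SlimmableConv2d(3, 64, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
feat_teacher = conv(x, width=1.0)  # 64 channels
feat_student = conv(x, width=0.5)  # 32 channels, sliced from the same weights
```

Because the branches share weights, compressing the student also shapes the teacher, which is what makes this self-distillation rather than ordinary teacher-student training.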
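The difficulty-aware branch selection mentioned at the end of the abstract can likewise be sketched as a thresholding rule on a per-patch difficulty score. The proxy below (mean gradient magnitude of the low-resolution patch) and the `widths`/`thresholds` values are assumptions for illustration; the paper's actual selection criterion may differ.

```python
import torch

def select_branch(lr_patch, widths=(0.25, 0.5, 0.75, 1.0),
                  thresholds=(0.01, 0.03, 0.06)):
    """Hypothetical dynamic-inference rule: flat patches go to a narrow
    (cheap) branch, textured patches to the full-width branch."""
    gray = lr_patch.mean(dim=1, keepdim=True)               # NCHW -> N1HW
    gx = (gray[..., :, 1:] - gray[..., :, :-1]).abs().mean()
    gy = (gray[..., 1:, :] - gray[..., :-1, :]).abs().mean()
    score = (gx + gy).item()                                # difficulty proxy
    for width, thr in zip(widths, thresholds):
        if score < thr:
            return width
    return widths[-1]

patch = torch.rand(1, 3, 48, 48)
w = select_branch(patch)  # e.g. 0.5 -> run only the half-width branch
```

Since every branch lives inside one switchable network (the Pro-CSD result), changing `w` at runtime costs nothing beyond slicing a different subset of the shared weights.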
Pages: 14
Related papers
50 records in total
  • [1] Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
    Wang, Yanbo
    Lin, Shaohui
    Qu, Yanyun
    Wu, Haiyan
    Zhang, Zhizhong
    Xie, Yuan
    Yao, Angela
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1122 - 1128
  • [2] Towards Elastic Image Super-Resolution Network via Progressive Self-distillation
    Yu, Xin'an
    Zhang, Dongyang
    Liu, Cencen
    Dong, Qiang
    Duan, Guiduo
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VIII, 2025, 15038 : 137 - 150
  • [3] Adjustable super-resolution network via deep supervised learning and progressive self-distillation
    Li, Juncheng
    Fang, Faming
    Zeng, Tieyong
    Zhang, Guixu
    Wang, Xizhao
    NEUROCOMPUTING, 2022, 500 : 379 - 393
  • [4] Semantic Super-Resolution via Self-Distillation and Adversarial Learning
    Park, Hanhoon
    IEEE ACCESS, 2024, 12 : 2361 - 2370
  • [6] Knowledge Distillation for Single Image Super-Resolution via Contrastive Learning
    Liu, Cencen
    Zhang, Dongyang
    Qin, Ke
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 1079 - 1083
  • [7] Infrared Image Super-Resolution via Progressive Compact Distillation Network
    Fan, Kefeng
    Hong, Kai
    Li, Fei
    ELECTRONICS, 2021, 10 (24)
  • [8] Image super-resolution via dynamic network
    Tian, Chunwei
    Zhang, Xuanyu
    Zhang, Qi
    Yang, Mingming
    Ju, Zhaojie
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2024, 9 (04) : 837 - 849
  • [9] Feature-Domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution
    Moon, Hyeon-Cheol
    Kim, Jae-Gon
    Jeong, Jinwoo
    Kim, Sungjei
    IEEE ACCESS, 2023, 11 : 131885 - 131896
  • [10] Image Super-resolution via Progressive Cascading Residual Network
    Ahn, Namhyuk
    Kang, Byungkon
    Sohn, Kyung-Ah
    PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, : 904 - 912