Dynamic image super-resolution via progressive contrastive self-distillation

Cited: 0
Authors
Zhang, Zhizhong [1 ,2 ]
Xie, Yuan [1 ,2 ]
Zhang, Chong [2 ]
Wang, Yanbo [2 ]
Qu, Yanyun [3 ]
Lin, Shaohui [2 ]
Ma, Lizhuang [2 ]
Tian, Qi [4 ]
Affiliations
[1] Ningde Normal Univ, Ningde, Peoples R China
[2] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200000, Peoples R China
[3] Xiamen Univ, Dept Comp Sci, Xiamen 361005, Peoples R China
[4] Huawei Noahs Ark Lab, Huawei, Peoples R China
Funding
Natural Science Foundation of Shanghai;
Keywords
Single Image Super-Resolution; Model compression; Model acceleration; Dynamic neural networks
DOI
10.1016/j.patcog.2024.110502
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional neural networks (CNNs) are highly successful for image super-resolution (SR). However, they often require sophisticated architectures with high memory cost and computational overhead, significantly restricting their practical deployment on resource-limited devices. In this paper, we propose a novel dynamic contrastive self-distillation (Dynamic-CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models, and explore using the trained model for dynamic inference. In particular, to build a compact student network, a channel-splitting super-resolution network (CSSR-Net) is first constructed from a target teacher network. Then, we propose a novel contrastive loss to improve the quality of SR images via explicit knowledge transfer. Furthermore, progressive CSD (Pro-CSD) is developed to extend the two-branch CSSR-Net into a multi-branch architecture, yielding a model that is switchable at runtime. Finally, a difficulty-aware branch selection strategy for dynamic inference is presented. Extensive experiments demonstrate that the proposed Dynamic-CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN.
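To make the pipeline concrete, the following is a minimal, self-contained PyTorch sketch of the three ingredients the abstract describes: a channel-splitting layer whose narrow student branches reuse the leading channels of the full teacher weights, a simplified pixel-space stand-in for the contrastive distillation loss, and a toy difficulty-aware branch selector. All class names, the exact loss form, and the thresholds below are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch only: module names, the pixel-space loss and the
# gradient-based difficulty score are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitConv(nn.Module):
    """3x3 conv whose leading output channels form the narrow branches,
    so a single weight tensor serves every width (channel splitting)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x, width=1.0):
        out_ch = max(1, int(self.conv.out_channels * width))
        w = self.conv.weight[:out_ch, :x.size(1)]  # slice out/in channels
        return F.conv2d(x, w, self.conv.bias[:out_ch], padding=1)

class TinySR(nn.Module):
    """Toy SR network: width=1.0 is the teacher branch, smaller widths
    are student branches that share the teacher's weights."""
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.head = SplitConv(3, ch)
        self.body = SplitConv(ch, ch)
        self.tail = SplitConv(ch, 3 * scale * scale)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x, width=1.0):
        f = F.relu(self.head(x, width))
        f = F.relu(self.body(f, width))
        return self.shuffle(self.tail(f, 1.0))  # tail always full width

def contrastive_loss(student_sr, teacher_sr, lr_img, hr_img):
    """Pixel-space stand-in for the contrastive loss: pull the student
    toward the detached teacher output (positive) and push it away from
    a blurry bicubic upsample of the LR input (negative)."""
    neg = F.interpolate(lr_img, size=hr_img.shape[-2:],
                        mode='bicubic', align_corners=False)
    pos_d = F.l1_loss(student_sr, teacher_sr.detach())
    neg_d = F.l1_loss(student_sr, neg)
    return F.l1_loss(student_sr, hr_img) + pos_d / (neg_d + 1e-6)

def select_width(lr_patch, widths=(0.25, 0.5, 1.0), thr=(0.02, 0.05)):
    """Toy difficulty-aware selection: flat patches (small vertical
    gradient) count as 'easy' and go to the narrowest branch."""
    diff = (lr_patch[..., 1:, :] - lr_patch[..., :-1, :]).abs().mean().item()
    for w, t in zip(widths, thr):
        if diff < t:
            return w
    return widths[-1]

# One joint training step over the teacher and two student widths.
model = TinySR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
lr_img = torch.rand(4, 3, 32, 32)   # fake LR batch
hr_img = torch.rand(4, 3, 64, 64)   # fake HR targets (x2 scale)

opt.zero_grad()
teacher_sr = model(lr_img, width=1.0)
loss = F.l1_loss(teacher_sr, hr_img)          # supervise the teacher
for w in (0.25, 0.5):                         # distill into each student
    loss = loss + contrastive_loss(model(lr_img, w), teacher_sr,
                                   lr_img, hr_img)
loss.backward()
opt.step()

# At inference, pick a branch per input by estimated difficulty.
print(select_width(lr_img))

Because every branch shares the teacher's leading channels, switching widths at runtime adds no parameters. In the actual Pro-CSD scheme the narrower branches would be introduced progressively during training rather than jointly from the start as in this toy step, and the paper's contrastive loss operates in a learned feature space rather than pixel space.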
Pages: 14
Related Papers
50 records in total
  • [31] Single Image Super-Resolution via Wide-Activation Feature Distillation Network
    Su, Zhen
    Wang, Yuze
    Ma, Xiang
    Sun, Mang
    Cheng, Deqiang
    Li, Chao
    Jiang, He
    SENSORS, 2024, 24 (14)
  • [32] Infrared image super-resolution via transformed self-similarity
    Qi, Wei
    Han, Jing
    Zhang, Yi
    Bai, Lian-fa
    INFRARED PHYSICS & TECHNOLOGY, 2017, 81 : 89 - 96
  • [33] A Progressive Decoupled Network for Blind Image Super-Resolution
    Luo, Laigan
    Yi, Benshun
    Zhu, Chao
    IEEE ACCESS, 2024, 12 : 53818 - 53827
  • [34] Progressive Attentional Learning for Underwater Image Super-Resolution
    Chen, Xuelei
    Wei, Shiqing
    Yi, Chao
    Quan, Lingwei
    Lu, Cunyue
    INTELLIGENT ROBOTICS AND APPLICATIONS, 2020, 12595 : 233 - 243
  • [35] Tolerant Self-Distillation for image classification
    Liu, Mushui
    Yu, Yunlong
    Ji, Zhong
    Han, Jungong
    Zhang, Zhongfei
    NEURAL NETWORKS, 2024, 174
  • [36] FEDSR: Federated Learning for Image Super-Resolution via detail-assisted contrastive learning
    Yang, Yue
    Ren, Xiaodong
    Ke, Liangjun
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [37] GSDD: Generative Space Dataset Distillation for Image Super-resolution
    Zhang, Haiyu
    Su, Shaolin
    Zhu, Yu
    Sun, Jinqiu
    Zhang, Yanning
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 7, 2024, : 7069 - 7077
  • [38] Image classification based on self-distillation
    Li, Yuting
    Qing, Linbo
    He, Xiaohai
    Chen, Honggang
    Liu, Qiang
    APPLIED INTELLIGENCE, 2023, 53 : 9396 - 9408
  • [39] SDDA: A progressive self-distillation with decoupled alignment for multimodal image–text classification
    Chen, Xiaohao
    Shuai, Qianjun
    Hu, Feng
    Cheng, Yongqiang
    NEUROCOMPUTING, 2025, 614
  • [40] Data-Free Knowledge Distillation For Image Super-Resolution
    Zhang, Yiman
    Chen, Hanting
    Chen, Xinghao
    Deng, Yiping
    Xu, Chunjing
    Wang, Yunhe
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7848 - 7857