Dual Circle Contrastive Learning-Based Blind Image Super-Resolution

Cited by: 4
Authors
Qiu, Yajun [1 ]
Zhu, Qiang [1 ]
Zhu, Shuyuan [1 ]
Zeng, Bing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Degradation; Kernel; Superresolution; Task analysis; Training; Estimation; Probabilistic logic; Blind image super-resolution; degradation; extraction; contrastive learning; information distillation; NETWORK;
DOI
10.1109/TCSVT.2023.3297673
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics & Communication Technology];
Discipline Code
0808; 0809;
Abstract
Blind image super-resolution (BISR) aims to reconstruct a high-resolution image from a low-resolution (LR) image with unknown degradation. Although previous methods have demonstrated impressive performance by introducing degradation representations into the BISR task, two problems remain in most of them. First, they ignore the degradation characteristics of different image regions when generating the degradation representation. Second, they lack effective supervision of the generation of both the degradation representation and the super-resolution (SR) result. To solve these problems, we propose dual circle contrastive learning (DCCL) with high-efficiency modules to implement BISR. In our method, we design a degradation extraction network that obtains degradation representations from different texture regions of the LR image. Meanwhile, we propose DCCL, coupled with a degrading network, to guarantee that the obtained degradation representation contains as much of the LR image's degradation as possible. DCCL also drives the SR results to contain as little degradation as possible. Additionally, we develop an information distillation module for our BISR model to guarantee high-quality SR images. Experimental results demonstrate that our method achieves state-of-the-art BISR performance.
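The abstract does not include the loss formulation, but the contrastive supervision it describes is typically built on an InfoNCE-style objective: embeddings of two patches from the same LR image (which share one degradation) are pulled together, while embeddings from differently degraded images are pushed apart. The sketch below is illustrative only and is not the authors' implementation; the function name `info_nce` and the temperature value are assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on degradation embeddings.

    anchor, positive : 1-D embeddings from two patches of the same LR
                       image (same unknown degradation).
    negatives        : 2-D array, one row per embedding from a
                       differently degraded image.
    tau              : softmax temperature.
    """
    def unit(v):
        # Normalise so similarities are cosine similarities.
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

    a, p, n = unit(anchor), unit(positive), unit(negatives)
    pos = np.exp(a @ p / tau)            # similarity to the positive
    neg = np.exp(n @ a / tau).sum()      # similarities to all negatives
    return float(-np.log(pos / (pos + neg)))
```

With this objective, the loss is small when the positive pair is far more similar than any negative, which is the behaviour the paper relies on to make the degradation representation capture the LR image's degradation.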
Pages: 1757-1771 (15 pages)
Related Papers
50 records total
  • [31] Iterative dual regression network for blind image super-resolution
    Lei, Chunting
    Yang, Sihan
    Yang, Xiaomin
    Yan, Binyu
    Jeon, Gwanggil
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) : 2437 - 2446
  • [32] A Practical Contrastive Learning Framework for Single-Image Super-Resolution
    Wu, Gang
    Jiang, Junjun
    Liu, Xianming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15834 - 15845
  • [34] Pixel-wise Contrastive Learning for Single Image Super-resolution
    Zhou D.-W.
    Liu Z.-H.
    Liu Y.-K.
    Zidonghua Xuebao/Acta Automatica Sinica, 2024, 50 (01): : 181 - 193
  • [35] Knowledge Distillation for Single Image Super-Resolution via Contrastive Learning
    Liu, Cencen
    Zhang, Dongyang
    Qin, Ke
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 1079 - 1083
  • [36] Dual Back-Projection-Based Internal Learning for Blind Super-Resolution
    Kim, Jonghee
    Jung, Chanho
    Kim, Changick
    IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 1190 - 1194
  • [37] Contrastive Learning for Blind Super-Resolution via A Distortion-Specific Network
    Wang, Xinya
    Ma, Jiayi
    Jiang, Junjun
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2023, 10 (01) : 78 - 89
  • [39] Image super-resolution enhancement based on online learning and blind sparse decomposition
    Lu, Jinzheng
    Zhang, Qiheng
    Xu, Zhiyong
    Peng, Zhenming
    MIPPR 2011: PATTERN RECOGNITION AND COMPUTER VISION, 2011, 8004
  • [40] A SINGLE IMAGE BASED BLIND SUPER-RESOLUTION APPROACH
    Zhang, Wei
    Cham, Wai-Kuen
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2008, : 329 - 332