MTKDSR: Multi-Teacher Knowledge Distillation for Super Resolution Image Reconstruction

Cited by: 2
|
Authors
Yao, Gengqi [1 ]
Li, Zhan [1 ]
Bhanu, Bir [2 ]
Kang, Zhiqing [1 ]
Zhong, Ziyi [1 ]
Zhang, Qingfeng [1 ]
Affiliations
[1] Jinan Univ, Dept Comp Sci, Guangzhou, Peoples R China
[2] Univ Calif Riverside, Dept Elect & Comp Engn, Riverside, CA USA
Funding
National Natural Science Foundation of China;
Keywords
CONVOLUTIONAL NETWORK; SUPERRESOLUTION;
DOI
10.1109/ICPR56361.2022.9956250
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the performance of single image super-resolution (SISR) methods based on deep neural networks has improved significantly. However, most SR networks suffer from large model sizes and high computational costs. Meanwhile, a trade-off exists between reconstruction fidelity and perceptual quality in SISR. In this paper, we propose a multi-teacher knowledge distillation approach for SR (MTKDSR) that trains a balanced, lightweight, and efficient student network using different types of teacher models, each proficient in either reconstruction fidelity or perceptual quality. In addition, to generate more realistic and learnable textures, we propose an edge-guided SR network, EdgeSRN, as the perceptual teacher in the MTKDSR framework. In our experiments, EdgeSRN transferred knowledge more effectively than models based on adversarial learning. Extensive experiments show that the student trained by MTKDSR achieves better perceptual quality than state-of-the-art lightweight SR networks, with a smaller model size and fewer computations. Our code is available at https://github.com/lizhangray/MTKDSR.
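The abstract only outlines the training scheme, so the following is a minimal, hypothetical sketch (written in PyTorch, which the record does not specify) of how a student SR network might be supervised jointly by a ground-truth loss and by the outputs of a fidelity-oriented teacher and a perception-oriented teacher such as EdgeSRN. The function name, loss terms, and weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical multi-teacher distillation sketch (PyTorch); NOT the authors' code.
# `student`, `fidelity_teacher`, and `perceptual_teacher` are assumed to be any
# nn.Module SR networks mapping a low-resolution batch to a high-resolution one.
import torch
import torch.nn.functional as F

def mtkd_loss(student, fidelity_teacher, perceptual_teacher,
              lr, hr, w_gt=1.0, w_fid=0.5, w_perc=0.5):
    """Weighted sum of a ground-truth loss and two teacher-imitation losses."""
    sr = student(lr)
    with torch.no_grad():                 # teachers are frozen during distillation
        sr_fid = fidelity_teacher(lr)     # distortion-oriented teacher (e.g., PSNR-driven)
        sr_perc = perceptual_teacher(lr)  # perception-oriented teacher (e.g., edge-guided)
    loss_gt = F.l1_loss(sr, hr)           # supervise with the ground-truth HR image
    loss_fid = F.l1_loss(sr, sr_fid)      # imitate the fidelity teacher's output
    loss_perc = F.l1_loss(sr, sr_perc)    # imitate the perceptual teacher's output
    return w_gt * loss_gt + w_fid * loss_fid + w_perc * loss_perc
```

In practice, the exact distillation terms (pixel-wise, feature-based, or otherwise) and their weights would follow the paper rather than this sketch.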
Pages: 352 - 358
Number of pages: 7
Related papers
50 records in total
  • [1] MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
    Jiang, Yuxuan
    Feng, Chen
    Zhang, Fan
    Bull, David
    COMPUTER VISION - ECCV 2024, PT XXXIX, 2025, 15097 : 364 - 382
  • [2] Correlation Guided Multi-teacher Knowledge Distillation
    Shi, Luyao
    Jiang, Ning
    Tang, Jialiang
    Huang, Xinlei
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 562 - 574
  • [3] Reinforced Multi-Teacher Selection for Knowledge Distillation
    Yuan, Fei
    Shou, Linjun
    Pei, Jian
    Lin, Wutao
    Gong, Ming
    Fu, Yan
    Jiang, Daxin
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 14284 - 14291
  • [4] Knowledge Distillation via Multi-Teacher Feature Ensemble
    Ye, Xin
    Jiang, Rongxin
    Tian, Xiang
    Zhang, Rui
    Chen, Yaowu
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 566 - 570
  • [5] A Multi-Teacher Assisted Knowledge Distillation Approach for Enhanced Face Image Authentication
    Cheng, Tiancong
    Zhang, Ying
    Yin, Yifang
    Zimmermann, Roger
    Yu, Zhiwen
    Guo, Bin
    PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023, 2023, : 135 - 143
  • [6] CONFIDENCE-AWARE MULTI-TEACHER KNOWLEDGE DISTILLATION
    Zhang, Hailin
    Chen, Defang
    Wang, Can
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4498 - 4502
  • [7] Adaptive multi-teacher multi-level knowledge distillation
    Liu, Yuang
    Zhang, Wei
    Wang, Jun
    NEUROCOMPUTING, 2020, 415 : 106 - 113
  • [8] Decoupled Multi-teacher Knowledge Distillation based on Entropy
    Cheng, Xin
    Tang, Jialiang
    Zhang, Zhiqiang
    Yu, Wenxin
    Jiang, Ning
    Zhou, Jinjia
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,