MTKDSR: Multi-Teacher Knowledge Distillation for Super Resolution Image Reconstruction

Cited by: 2
Authors
Yao, Gengqi [1 ]
Li, Zhan [1 ]
Bhanu, Bir [2 ]
Kang, Zhiqing [1 ]
Zhong, Ziyi [1 ]
Zhang, Qingfeng [1 ]
Affiliations
[1] Jinan Univ, Dept Comp Sci, Guangzhou, Peoples R China
[2] Univ Calif Riverside, Dept Elect & Comp Engn, Riverside, CA USA
Funding
National Natural Science Foundation of China;
Keywords
CONVOLUTIONAL NETWORK; SUPERRESOLUTION;
DOI
10.1109/ICPR56361.2022.9956250
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the performance of single image super-resolution (SISR) methods based on deep neural networks has significantly improved. However, large model sizes and high computational costs are common problems for most SR networks. Meanwhile, a trade-off exists between higher reconstruction fidelity and improved perceptual quality in solving the SISR problem. In this paper, we propose a multi-teacher knowledge distillation approach for SR (MTKDSR) tasks that can train a balanced, lightweight, and efficient student network using different types of teacher models that are proficient in terms of reconstruction fidelity or perceptual quality. In addition, to generate more realistic and learnable textures, we propose an edge-guided SR network, EdgeSRN, as a perceptual teacher used in the MTKDSR framework. In our experiments, EdgeSRN was superior to models based on adversarial learning in terms of effective knowledge transfer. Extensive experiments show that students trained by MTKDSR exhibit superior perceptual quality compared to state-of-the-art lightweight SR networks, with a smaller model size and fewer computations. Our code is available at https://github.com/lizhangray/MTKDSR.
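The abstract describes distilling one student from multiple teachers, each strong on a different objective (reconstruction fidelity vs. perceptual quality). A minimal sketch of such a multi-teacher objective, assuming (not taken from the paper) a weighted sum of a ground-truth reconstruction loss and per-teacher distillation losses; the weights `w_rec`, `w_psnr`, and `w_percep` are hypothetical hyperparameters:

```python
def l1_loss(a, b):
    """Mean absolute error between two flat lists of pixel values."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def mtkd_loss(student_out, ground_truth, psnr_teacher_out, percep_teacher_out,
              w_rec=1.0, w_psnr=0.5, w_percep=0.5):
    """Hypothetical multi-teacher distillation objective: supervise the
    student with the HR ground truth and with the super-resolved outputs
    of a fidelity-oriented (PSNR) teacher and a perceptual teacher."""
    return (w_rec * l1_loss(student_out, ground_truth)
            + w_psnr * l1_loss(student_out, psnr_teacher_out)
            + w_percep * l1_loss(student_out, percep_teacher_out))

# Toy 4-pixel example: the student is pulled toward the ground truth
# and toward both teachers simultaneously.
loss = mtkd_loss([0.5] * 4, [0.6] * 4, [0.55] * 4, [0.45] * 4)
```

In practice the losses would be computed over image tensors in a deep-learning framework, and the teacher weights could be scheduled or balanced dynamically; the fixed scalar weights here are only for illustration.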
Pages: 352-358
Page count: 7
Related papers
50 records
  • [31] Visual emotion analysis using skill-based multi-teacher knowledge distillation
    Cladiere, Tristan
    Alata, Olivier
    Ducottet, Christophe
    Konik, Hubert
    Legrand, Anne-Claire
    PATTERN ANALYSIS AND APPLICATIONS, 2025, 28 (02)
  • [32] mKDNAD: A network flow anomaly detection method based on multi-teacher knowledge distillation
    Yang, Yang
    Liu, Dan
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 314 - 319
  • [33] A Multi-teacher Knowledge Distillation Framework for Distantly Supervised Relation Extraction with Flexible Temperature
    Fei, Hongxiao
    Tan, Yangying
    Huang, Wenti
    Long, Jun
    Huang, Jincai
    Yang, Liu
    WEB AND BIG DATA, PT II, APWEB-WAIM 2023, 2024, 14332 : 103 - 116
  • [34] Named Entity Recognition Method Based on Multi-Teacher Collaborative Cyclical Knowledge Distillation
    Jin, Chunqiao
    Yang, Shuangyuan
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 230 - 235
  • [35] Dual cross knowledge distillation for image super-resolution
    Fang, Hangxiang
    Long, Yongwen
    Hu, Xinyi
    Ou, Yangtao
    Huang, Yuanjia
    Hu, Haoji
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [36] Continual Learning with Confidence-based Multi-teacher Knowledge Distillation for Neural Machine Translation
    Guo, Jiahua
    Liang, Yunlong
    Xu, Jinan
    2024 6TH INTERNATIONAL CONFERENCE ON NATURAL LANGUAGE PROCESSING, ICNLP 2024, 2024, : 336 - 343
  • [37] Enhanced Accuracy and Robustness via Multi-teacher Adversarial Distillation
    Zhao, Shiji
    Yu, Jie
    Sun, Zhenlong
    Zhang, Bo
    Wei, Xingxing
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 585 - 602
  • [38] UNIC: Universal Classification Models via Multi-teacher Distillation
    Sariyildiz, Mert Bulent
    Weinzaepfel, Philippe
    Lucas, Thomas
    Larlus, Diane
    Kalantidis, Yannis
    COMPUTER VISION-ECCV 2024, PT IV, 2025, 15062 : 353 - 371
  • [39] LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention
    Cheng, Xin
    Zhou, Jinjia
    INFORMATION, 2024, 15 (11)
  • [40] MT4MTL-KD: A Multi-Teacher Knowledge Distillation Framework for Triplet Recognition
    Gui, Shuangchun
    Wang, Zhenkun
    Chen, Jixiang
    Zhou, Xun
    Zhang, Chen
    Cao, Yi
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (04) : 1628 - 1639