MTKDSR: Multi-Teacher Knowledge Distillation for Super Resolution Image Reconstruction

Cited by: 2
Authors
Yao, Gengqi [1]
Li, Zhan [1]
Bhanu, Bir [2]
Kang, Zhiqing [1]
Zhong, Ziyi [1]
Zhang, Qingfeng [1]
Affiliations
[1] Jinan Univ, Dept Comp Sci, Guangzhou, Peoples R China
[2] Univ Calif Riverside, Dept Elect & Comp Engn, Riverside, CA USA
Funding
National Natural Science Foundation of China;
Keywords
CONVOLUTIONAL NETWORK; SUPERRESOLUTION;
DOI
10.1109/ICPR56361.2022.9956250
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the performance of single image super-resolution (SISR) methods based on deep neural networks has improved significantly. However, large model sizes and high computational costs are common problems for most SR networks. Meanwhile, a trade-off exists between higher reconstruction fidelity and improved perceptual quality in solving the SISR problem. In this paper, we propose a multi-teacher knowledge distillation approach for SR (MTKDSR) tasks that can train a balanced, lightweight, and efficient student network using different types of teacher models that excel in either reconstruction fidelity or perceptual quality. In addition, to generate more realistic and learnable textures, we propose an edge-guided SR network, EdgeSRN, as the perceptual teacher in the MTKDSR framework. In our experiments, EdgeSRN was superior to models based on adversarial learning in its ability to transfer knowledge effectively. Extensive experiments show that the student trained by MTKDSR exhibits superior perceptual quality compared to state-of-the-art lightweight SR networks, with a smaller model size and fewer computations. Our code is available at https://github.com/lizhangray/MTKDSR.
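To make the distillation scheme described in the abstract concrete, below is a minimal PyTorch-style sketch of one training step in which a lightweight student learns jointly from a fidelity-oriented teacher, a perceptual teacher (such as EdgeSRN), and the ground-truth HR image. The names (`mtkd_step`, `w_gt`, `w_fid`, `w_perc`) and the use of plain L1 losses are illustrative assumptions; the paper's actual loss formulation and weighting may differ.

```python
# Hypothetical sketch of a multi-teacher distillation step for SR;
# the loss terms and weights are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def mtkd_step(student, fidelity_teacher, perceptual_teacher,
              lr_img, hr_img, w_gt=1.0, w_fid=0.5, w_perc=0.5):
    """Compute one distillation loss: the student imitates a PSNR-oriented
    teacher and a texture-oriented (perceptual) teacher while also
    matching the ground-truth HR image."""
    with torch.no_grad():                       # teachers stay frozen
        sr_fid = fidelity_teacher(lr_img)       # high-fidelity output
        sr_perc = perceptual_teacher(lr_img)    # texture-rich output
    sr = student(lr_img)
    loss = (w_gt * F.l1_loss(sr, hr_img)        # ground-truth term
            + w_fid * F.l1_loss(sr, sr_fid)     # fidelity-teacher term
            + w_perc * F.l1_loss(sr, sr_perc))  # perceptual-teacher term
    return loss
```

In such a setup, the two teacher weights could be tuned or scheduled to trade reconstruction fidelity against perceptual quality, which is the balance the MTKDSR framework targets.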
Pages: 352-358
Number of pages: 7