Discrepancy and Gradient-Guided Multi-modal Knowledge Distillation for Pathological Glioma Grading

Citations: 15
Authors
Xing, Xiaohan [1 ]
Chen, Zhen [1 ]
Zhu, Meilu [1 ]
Hou, Yuenan [2 ]
Gao, Zhifan [3 ]
Yuan, Yixuan [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Kowloon, Hong Kong, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
[3] Sun Yat Sen Univ, Sch Biomed Engn, Guangzhou, Guangdong, Peoples R China
Keywords
Knowledge distillation; Missing modality; Glioma grading
DOI
10.1007/978-3-031-16443-9_61
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
The fusion of multi-modal data, e.g., pathology slides and genomic profiles, can provide complementary information and benefit glioma grading. However, genomic profiles are difficult to obtain due to high costs and technical challenges, which limits the clinical application of multi-modal diagnosis. In this work, we address the clinically relevant problem in which paired pathology-genomic data are available during training, while only pathology slides are accessible at inference. To improve the performance of pathological grading models, we present a discrepancy and gradient-guided distillation framework that transfers privileged knowledge from a multi-modal teacher to a pathology student. On the teacher side, to prepare useful knowledge, we propose a Discrepancy-induced Contrastive Distillation (DC-Distill) module that selects reliable contrastive samples via the teacher-student discrepancy to regulate the feature distribution of the student. On the student side, since the teacher may convey incorrect information, we propose a Gradient-guided Knowledge Refinement (GK-Refine) module that builds a knowledge bank and adaptively absorbs reliable knowledge according to its agreement in the gradient space. Experiments on the TCGA GBM-LGG dataset show that the proposed distillation framework significantly improves pathological glioma grading and outperforms other knowledge distillation (KD) methods. Notably, with pathology slides alone, our method achieves performance comparable to existing multi-modal methods. The code is available at https://github.com/CityU-AIM-Group/MultiModal-learning.
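The two ideas at the core of the abstract, distilling a teacher's softened outputs into a student, and trusting transferred knowledge only when it agrees with the task gradient, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' released DC-Distill/GK-Refine implementation; all function names here are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Classic distillation loss: KL(teacher || student) on
    temperature-softened class probabilities, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))) * T * T)

def gradient_agreement(g_task, g_kd):
    """Cosine similarity between the task-loss gradient and the
    distillation-loss gradient; a positive value suggests the
    transferred knowledge pushes the student in a helpful direction,
    in the spirit of gradient-space agreement checks."""
    g_task = np.asarray(g_task, dtype=float)
    g_kd = np.asarray(g_kd, dtype=float)
    denom = np.linalg.norm(g_task) * np.linalg.norm(g_kd) + 1e-12
    return float(np.dot(g_task, g_kd) / denom)

# Example: matching logits give (near-)zero distillation loss,
# and aligned gradients score close to +1.
loss = kd_loss([1.0, 2.0, 3.0], [0.5, 2.5, 3.0])
agree = gradient_agreement([1.0, 0.5], [0.9, 0.6])
```

A refinement step along these lines would keep a distillation term only when the agreement score is positive, which is one simple way to filter out unreliable teacher knowledge.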
Pages: 636-646
Page count: 11
Related papers
50 records in total
  • [1] A discrepancy-aware self-distillation method for multi-modal glioma grading
    Li, Jiayi
    Zhang, Lei
    Zhong, Ke
    Qian, Guangwu
    KNOWLEDGE-BASED SYSTEMS, 2024, 295
  • [2] Comprehensive learning and adaptive teaching: Distilling multi-modal knowledge for pathological glioma grading
    Xing, Xiaohan
    Zhu, Meilu
    Chen, Zhen
    Yuan, Yixuan
    MEDICAL IMAGE ANALYSIS, 2024, 91
  • [3] Gradient-Guided Multi-Modal Image Reconstruction for Electrical Impedance Tomography
    Liu, Zhe
    Dong, Huazhi
    Wang, Jiazheng
    Chen, Zhou
    Zhou, Wei
    Yang, Yunjie
    2023 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC, 2023
  • [4] Unpaired Multi-Modal Segmentation via Knowledge Distillation
    Dou, Qi
    Liu, Quande
    Heng, Pheng Ann
    Glocker, Ben
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (07) : 2415 - 2425
  • [5] Gradient modulated contrastive distillation of low-rank multi-modal knowledge for disease diagnosis
    Xing, Xiaohan
    Chen, Zhen
    Hou, Yuenan
    Yuan, Yixuan
    MEDICAL IMAGE ANALYSIS, 2023, 88
  • [6] Cross-Modal Knowledge Distillation in Multi-Modal Fake News Detection
    Wei, Zimian
    Pan, Hengyue
    Qiao, Linbo
    Niu, Xin
    Dong, Peijie
    Li, Dongsheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4733 - 4737
  • [7] Intraoperative Glioma Grading Using Neural Architecture Search and Multi-Modal Imaging
    Xiao, Anqi
    Shen, Biluo
    Shi, Xiaojing
    Zhang, Zhe
    Zhang, Zeyu
    Tian, Jie
    Ji, Nan
    Hu, Zhenhua
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41 (10) : 2570 - 2581
  • [8] Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality
    Wang, Hu
    Ma, Congbo
    Zhang, Jianpeng
    Zhang, Yuan
    Avery, Jodie
    Hull, Louise
    Carneiro, Gustavo
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT IV, 2023, 14223 : 216 - 226
  • [9] Multi-Modal Knowledge Distillation for Domain-Adaptive Action Recognition
    Zhu, Xiaoyu
    Liu, Wenhe
    de Melo, Celso M.
    Hauptmann, Alexander
    SYNTHETIC DATA FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING: TOOLS, TECHNIQUES, AND APPLICATIONS II, 2024, 13035