A Comprehensive Multi-modal Domain Adaptative Aid Framework for Brain Tumor Diagnosis

Times Cited: 0
Authors
Chu, Wenxiu [1 ]
Zhou, Yudan [1 ]
Cai, Shuhui [1 ,2 ]
Chen, Zhong [1 ,2 ]
Cai, Congbo [1 ,2 ]
Affiliations
[1] Xiamen Univ, Inst Artificial Intelligence, Xiamen, Peoples R China
[2] Xiamen Univ, Dept Elect Sci, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China
Keywords
Unsupervised Domain Adaptation; IDH Mutation; Ki67; Genotype; Brain Tumor Segmentation; Grading; Glioma Subtype; Segmentation
DOI
10.1007/978-981-99-8558-6_32
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Accurate segmentation and grading of brain tumors from multi-modal magnetic resonance imaging (MRI) play a vital role in the diagnosis and treatment of brain tumors. Gene expression in glioma also influences the selection of treatment strategies and the assessment of patient survival, including the mutation status of isocitrate dehydrogenase (IDH), the co-deletion status of 1p/19q, and the Ki67 value. However, obtaining medical image annotations is both time-consuming and expensive, and it is challenging to perform tasks such as brain tumor segmentation, grading, and genotype prediction directly on label-deprived multi-modal MRI. To address this issue, we propose a comprehensive multi-modal domain adaptative aid (CMDA) framework built on hospital datasets from multiple centers, which can effectively relieve the distributional differences between labeled source datasets and unlabeled target datasets. Specifically, a comprehensive diagnostic module is proposed to simultaneously accomplish brain tumor segmentation, grading, genotyping, and glioma subtype classification. Furthermore, to learn the data distribution between labeled public datasets and unlabeled local hospital datasets, we treat the semantic segmentation results as an output capturing the similarity between different data sources and employ adversarial learning to help the network learn domain knowledge. Experimental results show that our end-to-end CMDA framework outperforms methods based on direct transfer learning as well as other state-of-the-art unsupervised methods.
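The abstract's adversarial component (a discriminator trained on segmentation outputs from the two domains, which the segmentation network then learns to fool) can be illustrated with a toy sketch. All shapes, data, and names below are illustrative placeholders, not the paper's actual CMDA architecture; the "segmentation outputs" are random vectors with an artificial domain gap, and the discriminator is a plain logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Flattened "segmentation probability maps" (batch, pixels) with an
# artificial domain gap between source and target.
src_out = 0.5 * rng.random((8, 64))        # labeled source domain
tgt_out = 0.5 + 0.5 * rng.random((8, 64))  # unlabeled target domain

def disc_loss_and_grad(w, src, tgt):
    """Binary cross-entropy for a linear discriminator: source=1, target=0."""
    p_src = sigmoid(src @ w)
    p_tgt = sigmoid(tgt @ w)
    loss = -np.mean(np.log(p_src + 1e-8)) - np.mean(np.log(1.0 - p_tgt + 1e-8))
    grad = -(src.T @ (1.0 - p_src)) / len(src) + (tgt.T @ p_tgt) / len(tgt)
    return loss, grad

def adversarial_loss(w, tgt):
    """Term the segmentation net would minimize: make target outputs 'look source'."""
    return -np.mean(np.log(sigmoid(tgt @ w) + 1e-8))

w = np.zeros(64)
for _ in range(300):  # plain gradient descent on the discriminator
    _, g = disc_loss_and_grad(w, src_out, tgt_out)
    w -= 0.05 * g

loss_before = adversarial_loss(np.zeros(64), tgt_out)  # untrained: log 2
loss_after = adversarial_loss(w, tgt_out)
# Once the discriminator separates the domains, the adversarial loss on the
# target outputs grows, giving the segmentation network a gradient to align them.
```

In the full framework, the gradient of the adversarial term would flow back through the segmentation network's target-domain predictions, alternating with discriminator updates in the usual GAN-style min-max training.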
Pages: 382-394 (13 pages)