Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation

Cited: 0
Authors
Konwer, Aishik [1 ]
Hu, Xiaoling [1 ]
Bae, Joseph [2 ]
Xu, Xuan [1 ]
Chen, Chao [2 ]
Prasanna, Prateek [2 ]
Affiliations
[1] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
[2] SUNY Stony Brook, Dept Biomed Informat, Stony Brook, NY 11794 USA
DOI
10.1109/ICCV51070.2023.01958
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In medical vision, different imaging modalities provide complementary information. However, in practice, not all modalities may be available during inference or even training. Previous approaches, e.g., knowledge distillation or image synthesis, often assume the availability of full modalities for all subjects during training; this is unrealistic and impractical due to the variability in data collection across sites. We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training, even when only limited full modality samples are available. Meta-learning enhances partial modality representations to full modality representations by meta-training on partial modality data and meta-testing on limited full modality samples. Additionally, we co-supervise this feature enrichment by introducing an auxiliary adversarial learning branch. More specifically, a missing modality detector is used as a discriminator to mimic the full modality setting. Our segmentation framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.
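The meta-learning loop described above (meta-training on partial-modality data, then meta-testing the adapted weights on limited full-modality samples) can be sketched in miniature. The snippet below is an illustrative first-order-MAML-style toy, not the paper's implementation: the linear "segmenter", the synthetic data, the `drop_modalities` helper, and the learning rates are all invented for the example, and the auxiliary adversarial branch (missing-modality detector as discriminator) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and shapes are illustrative):
# 4 "MRI modalities", each a d-dim feature; the model is a linear map w.
n_mod, d, n = 4, 8, 32
X_full = rng.normal(size=(n, n_mod, d))   # limited full-modality samples
y = rng.normal(size=(n,))                 # toy regression targets

def drop_modalities(X, rng):
    """Simulate partial-modality input by zeroing random modality channels."""
    mask = rng.random(X.shape[1]) < 0.5
    if mask.all():                        # always keep at least one modality
        mask[rng.integers(X.shape[1])] = False
    Xp = X.copy()
    Xp[:, mask, :] = 0.0
    return Xp

def loss_and_grad(w, X, y):
    """Squared-error loss of the linear model and its gradient in w."""
    A = X.reshape(len(X), -1)
    r = A @ w - y
    return float(r @ r) / len(X), 2.0 * A.T @ r / len(X)

w = np.zeros(n_mod * d)
alpha, beta = 1e-2, 1e-2                  # inner / outer learning rates
for step in range(200):
    # Meta-train: inner adaptation step on a partial-modality episode.
    Xp = drop_modalities(X_full, rng)
    _, g_in = loss_and_grad(w, Xp, y)
    w_fast = w - alpha * g_in
    # Meta-test: outer update evaluated on the limited full-modality data
    # (first-order approximation: outer gradient taken at adapted weights).
    _, g_out = loss_and_grad(w_fast, X_full, y)
    w = w - beta * g_out

final_loss, _ = loss_and_grad(w, X_full, y)
```

The key structural point is the episode split: the inner step only ever sees modality-dropped inputs, while the outer objective is always measured against full-modality data, which is what pushes the partial-modality representation toward the full-modality one.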
Pages: 21358 - 21368
Page count: 11
Related papers
50 items in total
  • [21] Personalized Meta-Learning for Domain Agnostic Learning from Demonstration
    Schrum, Mariah L.
    Hedlund-Botti, Erin
    Gombolay, Matthew C.
    [J]. PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, : 1179 - 1181
  • [22] Learning to Learn Better Unimodal Representations via Adaptive Multimodal Meta-Learning
    Sun, Ya
    Mai, Sijie
    Hu, Haifeng
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 2209 - 2223
  • [23] Knowledge Distillation for Model-Agnostic Meta-Learning
    Zhang, Min
    Wang, Donglin
    Gai, Sibo
    [J]. ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1355 - 1362
  • [24] Conditional Meta-Learning of Linear Representations
    Denevi, Giulia
    Pontil, Massimiliano
    Ciliberto, Carlo
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [25] Provable Meta-Learning of Linear Representations
    Tripuraneni, Nilesh
    Jin, Chi
    Jordan, Michael I.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139 : 7447 - 7458
  • [26] Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
    Vuorio, Risto
    Sun, Shao-Hua
    Hu, Hexiang
    Lim, Joseph J.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [27] Visual analysis of meteorological satellite data via model-agnostic meta-learning
    Cheng, Shiyu
    Shen, Hanwei
    Shan, Guihua
    Niu, Beifang
    Bai, Weihua
    [J]. Journal of Visualization, 2021, 24 : 301 - 315
  • [28] Visual analysis of meteorological satellite data via model-agnostic meta-learning
    Cheng, Shiyu
    Shen, Hanwei
    Shan, Guihua
    Niu, Beifang
    Bai, Weihua
    [J]. JOURNAL OF VISUALIZATION, 2021, 24 (02) : 301 - 315
  • [29] Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning
    Volpi, Riccardo
    Larlus, Diane
    Rogez, Gregory
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 4441 - 4451
  • [30] Asynchronous Multimodal Video Sequence Fusion via Learning Modality-Exclusive and -Agnostic Representations
    Yang, Dingkang
    Li, Mingcheng
    Qu, Linhao
    Yang, Kun
    Zhai, Peng
    Wang, Song
    Zhang, Lihua
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 12360 - 12375