Multi-modal Segmentation with Missing MR Sequences Using Pre-trained Fusion Networks

Cited by: 6
Authors
van Garderen, Karin [1 ,3 ]
Smits, Marion [1 ]
Klein, Stefan [1 ,2 ]
Affiliations
[1] Erasmus MC, Dept Radiol & Nucl Med, Rotterdam, Netherlands
[2] Erasmus MC, Dept Med Informat, Rotterdam, Netherlands
[3] Med Delta, Delft, Netherlands
Keywords
Convolutional neural network; Glioma segmentation; Missing data
DOI
10.1007/978-3-030-33391-1_19
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Missing data is a common problem in machine learning, and in retrospective imaging research it is often encountered in the form of missing imaging modalities. We propose to account for missing modalities in the design and training of neural networks, to ensure that they can provide the best possible prediction even when multiple images are unavailable. The proposed network combines three modifications to the standard 3D UNet architecture: a training scheme with dropout of modalities, a multi-pathway architecture with a fusion layer in the final stage, and separate pre-training of these pathways. These modifications are evaluated incrementally, in terms of performance on full and missing data, on the BraTS multi-modal segmentation challenge. The final model shows significant improvement over the state of the art on missing data and requires less memory during training.
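The first of the three modifications, training with dropout of modalities, can be illustrated with a minimal sketch: during training, each input MR sequence (channel) is zeroed out independently with some probability, while at least one sequence is always kept so the network never sees an entirely empty input. The function name, the drop probability, and the channel-first layout below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def modality_dropout(volumes, p_drop=0.25, rng=None):
    """Randomly zero out whole MR sequences during training.

    volumes: array of shape (n_modalities, D, H, W), one channel per sequence.
    Each modality is dropped independently with probability p_drop,
    but at least one modality is always retained.
    Returns the masked volumes and the boolean keep-mask per modality.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = volumes.shape[0]
    keep = rng.random(n) >= p_drop
    if not keep.any():
        # Guarantee the network always receives at least one input sequence.
        keep[rng.integers(n)] = True
    # Broadcast the per-modality mask over the spatial dimensions.
    mask = keep.astype(volumes.dtype).reshape(n, *([1] * (volumes.ndim - 1)))
    return volumes * mask, keep
```

At test time the same mechanism models genuinely missing sequences: absent modalities are simply passed in as zeroed channels.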
Pages: 165-172 (8 pages)