Multi-modal brain tumor segmentation via disentangled representation learning and region-aware contrastive learning

Cited by: 7
Author
Zhou, Tongxue [1 ]
Affiliation
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Brain tumor segmentation; Multi-modal feature fusion; Disentangled representation learning; Contrastive learning; SURVIVAL; NET;
DOI
10.1016/j.patcog.2024.110282
CLC number
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Brain tumors threaten the lives and health of people worldwide. Automatic brain tumor segmentation from multiple MR images is a challenging task in medical image analysis, and accurate segmentation relies on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation, but they fail to capture the relationship between MR modalities and the feature correlation between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network based on disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn a valuable multi-modal feature representation. Subsequently, a novel disentangled representation learning scheme is proposed to decouple the fused feature representation into multiple factors corresponding to the target tumor regions. Furthermore, contrastive learning is introduced to help the network extract tumor-region-related feature representations. Finally, the segmentation results are obtained using the segmentation decoders. Quantitative and qualitative experiments conducted on the public BraTS 2018 and BraTS 2019 datasets justify the importance of the proposed strategies, and the proposed approach achieves better performance than other state-of-the-art approaches. In addition, the proposed strategies can be extended to other deep neural networks.
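The abstract does not give the exact formulation of the region-aware contrastive objective, so the following is only a minimal PyTorch-style sketch of one plausible reading: per-region prototypes are pooled from decoder features using the region masks, and an InfoNCE-style loss pulls same-region prototypes together across the batch while pushing different regions apart. The class name RegionContrastiveLoss, the temperature value, and the prototype pooling are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

class RegionContrastiveLoss(torch.nn.Module):
    # Illustrative region-aware contrastive loss (an assumption, not the paper's exact loss):
    # prototypes of the same tumor region attract each other across the batch,
    # prototypes of different regions repel each other.
    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, features: torch.Tensor, region_masks: torch.Tensor) -> torch.Tensor:
        # features:     (B, C, D, H, W) decoder feature maps
        # region_masks: (B, R, D, H, W) binary masks, one channel per target tumor region
        b, c = features.shape[:2]
        r = region_masks.shape[1]
        feats = features.flatten(2)                 # (B, C, N)
        masks = region_masks.flatten(2).float()     # (B, R, N)

        # Masked average pooling: one prototype per region and sample.
        protos = torch.einsum('bcn,brn->brc', feats, masks)
        protos = protos / masks.sum(dim=2).clamp(min=1.0).unsqueeze(-1)
        protos = F.normalize(protos.reshape(b * r, c), dim=1)    # (B*R, C)

        # Region label of every prototype; same label = positive pair.
        labels = torch.arange(r, device=features.device).repeat(b)
        sim = protos @ protos.t() / self.temperature              # (B*R, B*R)
        self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(self_mask, float('-inf'))           # exclude self-similarity

        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        log_prob = log_prob.masked_fill(self_mask, 0.0)           # avoid -inf * 0 = NaN
        loss = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
        return loss.mean()

In such a setup the loss would typically be added, with a small weight and a batch size of at least two, to the usual segmentation objective (e.g., Dice plus cross-entropy); the weighting and region definitions (whole tumor, tumor core, enhancing tumor) are likewise assumptions here.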
Pages: 11
Related Articles
50 records in total
  • [41] Improving Code Search with Multi-Modal Momentum Contrastive Learning
    Shi, Zejian
    Xiong, Yun
    Zhang, Yao
    Jiang, Zhijie
    Zhao, Jinjing
    Wang, Lei
    Li, Shanshan
    2023 IEEE/ACM 31ST INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION, ICPC, 2023, : 280 - 291
  • [42] Improving Medical Multi-modal Contrastive Learning with Expert Annotations
    Kumar, Yogesh
    Marttinen, Pekka
    COMPUTER VISION - ECCV 2024, PT XX, 2025, 15078 : 468 - 486
  • [43] CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud
    Paul, Sneha
    Patterson, Zachary
    Bouguila, Nizar
    2023 20TH CONFERENCE ON ROBOTS AND VISION, CRV, 2023, : 273 - 280
  • [44] Multi-modal contrastive learning of subcellular organization using DICE
    Nasser, Rami
Schaffer, Leah V.
    Ideker, Trey
    Sharan, Roded
    BIOINFORMATICS, 2024, 40 : ii105 - ii110
  • [45] Collaborative denoised graph contrastive learning for multi-modal recommendation
    Xu, Fuyong
    Zhu, Zhenfang
    Fu, Yixin
    Wang, Ru
    Liu, Peiyu
    INFORMATION SCIENCES, 2024, 679
  • [46] Partial Multi-Modal Hashing via Neighbor-Aware Completion Learning
    Tan, Wentao
    Zhu, Lei
    Li, Jingjing
    Zhang, Zheng
    Zhang, Huaxiang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8499 - 8510
  • [47] Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning
    Liu, Zecheng
    Wei, Jia
    Li, Rui
    Zhou, Jianlong
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 159
  • [48] Multi-modal tumor segmentation methods based on deep learning: a narrative review
    Xue, Hengzhi
    Yao, Yudong
    Teng, Yueyang
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2024, 14 (01) : 1122 - 1140
  • [49] Region-Aware Hierarchical Graph Contrastive Learning for Ride-Hailing Driver Profiling
    Chen, Kehua
    Han, Jindong
    Feng, Siyuan
    Zhu, Meixin
    Yang, Hai
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2023, 156
  • [50] CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations
    Zolfaghari, Mohammadreza
    Zhu, Yi
    Gehler, Peter
    Brox, Thomas
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1430 - 1439