Multi-modal brain tumor segmentation via disentangled representation learning and region-aware contrastive learning

Cited by: 7
Authors
Zhou, Tongxue [1 ]
Institutions
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; Multi-modal feature fusion; Disentangled representation learning; Contrastive learning; SURVIVAL; NET;
DOI
10.1016/j.patcog.2024.110282
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Brain tumors threaten people's lives and health worldwide. Automatic brain tumor segmentation from multiple MR images is a challenging task in medical image analysis. It is well known that accurate segmentation relies on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation. However, these methods fail to capture the relationship between MR modalities and the feature correlation between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network via disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn a valuable multi-modal feature representation. Subsequently, a novel disentangled representation learning scheme is proposed to decouple the fused feature representation into multiple factors corresponding to the target tumor regions. Furthermore, region-aware contrastive learning is introduced to help the network extract tumor-region-related feature representations. Finally, the segmentation results are obtained using the segmentation decoders. Quantitative and qualitative experiments conducted on the public BraTS 2018 and BraTS 2019 datasets justify the importance of the proposed strategies, and the proposed approach achieves better performance than other state-of-the-art approaches. In addition, the proposed strategies can be extended to other deep neural networks.
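The region-aware contrastive learning described in the abstract can be sketched as an InfoNCE-style objective in which embeddings that share the same tumor-region label form positive pairs and all other embeddings serve as negatives. The function below is an illustrative sketch only, not the paper's implementation; the feature shapes, the label encoding, and the temperature value are assumptions.

```python
import numpy as np

def region_contrastive_loss(features, region_labels, temperature=0.1):
    """InfoNCE-style region-aware contrastive loss (illustrative sketch).

    features      : (n, d) array of region feature embeddings
    region_labels : length-n sequence of tumor-region ids; embeddings with
                    the same id are treated as positive pairs
    """
    # L2-normalise so dot products become cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature          # (n, n) similarity logits
    n = sim.shape[0]
    labels = np.asarray(region_labels)
    eye = np.eye(n, dtype=bool)
    # positives: same region label, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye
    # denominator sums over all pairs except the anchor-self pair
    exp_sim = np.exp(sim)
    denom = (exp_sim * ~eye).sum(axis=1, keepdims=True)
    log_prob = sim - np.log(denom)
    # average negative log-probability over each anchor's positive pairs
    pos_counts = np.maximum(pos_mask.sum(axis=1), 1)
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_counts
    valid = pos_mask.any(axis=1)           # anchors with at least one positive
    return per_anchor[valid].mean()
```

With well-separated region clusters the loss is small, because same-region embeddings are already close; mismatched region labels inflate it, which is the signal that pulls region-related features together during training.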
Pages: 11
Related Papers
50 records total
  • [31] Segmentation of Multi-Modal MRI Brain Tumor Sub-Regions Using Deep Learning
    Srinivas, B.
    Rao, Gottapu Sasibhushana
    JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2020, 15 (04) : 1899 - 1909
  • [32] FMCS: Improving Code Search by Multi-Modal Representation Fusion and Momentum Contrastive Learning
    Liu, Wenjie
    Chen, Gong
    Xie, Xiaoyuan
    2024 IEEE 24TH INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2024, : 632 - 638
  • [33] Modality-Aware Mutual Learning for Multi-modal Medical Image Segmentation
    Zhang, Yao
    Yang, Jiawei
    Tian, Jiang
    Shi, Zhongchao
    Zhong, Cheng
    Zhang, Yang
    He, Zhiqiang
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT I, 2021, 12901 : 589 - 599
  • [34] Hierarchical Augmentation and Region-Aware Contrastive Learning for Semi-Supervised Semantic Segmentation of Remote Sensing Images
    Luo, Yuan
    Sun, Bin
    Li, Shutao
    Hu, Yulong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63
  • [35] Mineral: Multi-modal Network Representation Learning
    Kefato, Zekarias T.
    Sheikh, Nasrullah
    Montresor, Alberto
    MACHINE LEARNING, OPTIMIZATION, AND BIG DATA, MOD 2017, 2018, 10710 : 286 - 298
  • [36] Scalable multi-modal representation learning networks
    Fang, Zihan
    Zou, Ying
    Lan, Shiyang
    Du, Shide
    Tan, Yanchao
    Wang, Shiping
    Artificial Intelligence Review, 58 (7)
  • [37] Towards Multi-modal Anatomical Landmark Detection for Ultrasound-Guided Brain Tumor Resection with Contrastive Learning
    Salari, Soorena
    Rasoulian, Amirhossein
    Rivaz, Hassan
    Xiao, Yiming
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT IX, 2023, 14228 : 668 - 678
  • [38] RecFormer: Recurrent Multi-modal Transformer with History-Aware Contrastive Learning for Visual Dialog
    Lu, Liucun
    Qin, Jinghui
    Jie, Zequn
    Ma, Lin
    Lin, Liang
    Liang, Xiaodan
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT I, 2024, 14425 : 159 - 171
  • [39] NaMa: Neighbor-Aware Multi-Modal Adaptive Learning for Prostate Tumor Segmentation on Anisotropic MR Images
    Meng, Runqi
    Zhang, Xiao
    Huang, Shijie
    Gu, Yuning
    Liu, Guiqin
    Wu, Guangyu
    Wang, Nizhuan
    Sun, Kaicong
    Shen, Dinggang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4198 - 4206
  • [40] Learning Multi-Modal Scale-Aware Attentions for Efficient and Robust Road Segmentation
    Zhou, Yunjiao
    Yang, Jianfei
    Cao, Haozhi
    Zeng, Zhaoyang
    Zou, Han
    Xie, Lihua
    UNMANNED SYSTEMS, 2024, 12 (02) : 201 - 213