Multi-modal brain tumor segmentation via disentangled representation learning and region-aware contrastive learning

Cited by: 7
Author
Zhou, Tongxue [1 ]
Affiliation
[1] Hangzhou Normal Univ, Sch Informat Sci & Technol, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation; Multi-modal feature fusion; Disentangled representation learning; Contrastive learning; SURVIVAL; NET;
DOI
10.1016/j.patcog.2024.110282
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Brain tumors threaten the lives and health of people worldwide. Automatic brain tumor segmentation from multiple MR images is a challenging task in medical image analysis, and accurate segmentation relies on effective feature learning. Existing methods address multi-modal MR brain tumor segmentation by explicitly learning a shared feature representation; however, they fail to capture the relationships between MR modalities and the feature correlations between different target tumor regions. In this paper, I propose a multi-modal brain tumor segmentation network based on disentangled representation learning and region-aware contrastive learning. Specifically, a feature fusion module is first designed to learn a valuable multi-modal feature representation. Subsequently, a novel disentangled representation learning strategy is proposed to decouple the fused feature representation into multiple factors corresponding to the target tumor regions. Furthermore, region-aware contrastive learning is introduced to help the network extract tumor-region-related feature representations. Finally, the segmentation results are obtained by the segmentation decoders. Quantitative and qualitative experiments on the public BraTS 2018 and BraTS 2019 datasets justify the importance of the proposed strategies, and the proposed approach achieves better performance than other state-of-the-art approaches. In addition, the proposed strategies can be extended to other deep neural networks.
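The abstract describes the pipeline only at a high level. As a minimal sketch, the PyTorch-style code below illustrates one way the named components could fit together: per-modality encoders, a feature fusion module, a disentangling head producing one factor per target tumor region (e.g., whole tumor, tumor core, enhancing tumor), a prototype-based region contrastive loss, and per-region segmentation decoders. This is not the author's released implementation; every class name, channel size, and the exact form of the contrastive loss are assumptions made for illustration.

```python
# Minimal illustrative sketch (NOT the paper's code): multi-modal fusion,
# per-region disentangled factors, and a region-aware contrastive loss.
# All class names, channel sizes, and loss details are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionModule(nn.Module):
    """Fuse per-modality feature maps into one multi-modal representation."""
    def __init__(self, channels, num_modalities=4):
        super().__init__()
        self.fuse = nn.Conv3d(channels * num_modalities, channels, kernel_size=1)

    def forward(self, feats):                      # feats: list of (B, C, D, H, W)
        return self.fuse(torch.cat(feats, dim=1))


class DisentangleHead(nn.Module):
    """Decouple the fused feature map into one factor per target tumor region."""
    def __init__(self, channels, num_regions=3):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv3d(channels, channels, kernel_size=1) for _ in range(num_regions)]
        )

    def forward(self, fused):                      # -> list of region-specific maps
        return [head(fused) for head in self.heads]


def region_contrastive_loss(region_feats, temperature=0.1):
    """Prototype-based stand-in for region-aware contrastive learning: each
    sample's region embedding is pulled toward its region prototype and pushed
    away from the prototypes of the other regions."""
    embs = [F.normalize(f.mean(dim=(2, 3, 4)), dim=1) for f in region_feats]  # R x (B, C)
    protos = torch.cat(
        [F.normalize(e.mean(dim=0, keepdim=True), dim=1) for e in embs], dim=0
    )                                                                          # (R, C)
    loss = 0.0
    for r, emb in enumerate(embs):
        logits = emb @ protos.t() / temperature                                # (B, R)
        labels = torch.full((emb.shape[0],), r, dtype=torch.long, device=emb.device)
        loss = loss + F.cross_entropy(logits, labels)
    return loss / len(embs)


class RegionAwareSegNet(nn.Module):
    """Toy wrapper: modality encoders -> fusion -> region factors -> decoders."""
    def __init__(self, channels=16, num_modalities=4, num_regions=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Conv3d(1, channels, kernel_size=3, padding=1) for _ in range(num_modalities)]
        )
        self.fusion = FusionModule(channels, num_modalities)
        self.disentangle = DisentangleHead(channels, num_regions)
        self.decoders = nn.ModuleList(
            [nn.Conv3d(channels, 1, kernel_size=1) for _ in range(num_regions)]
        )

    def forward(self, x):                          # x: (B, num_modalities, D, H, W)
        feats = [enc(x[:, m:m + 1]) for m, enc in enumerate(self.encoders)]
        region_feats = self.disentangle(self.fusion(feats))
        seg_logits = [dec(f) for dec, f in zip(self.decoders, region_feats)]
        return seg_logits, region_feats


if __name__ == "__main__":
    net = RegionAwareSegNet()
    volume = torch.randn(2, 4, 16, 16, 16)         # tiny synthetic multi-modal volume
    seg_logits, region_feats = net(volume)
    print([tuple(s.shape) for s in seg_logits], region_contrastive_loss(region_feats).item())
```

In actual training, the contrastive term would be weighted and combined with standard Dice/cross-entropy segmentation losses; the paper's fusion module, disentanglement constraints, and decoders are more elaborate than this toy version.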
Pages: 11
Related Papers (50 in total)
  • [1] Region-aware Contrastive Learning for Semantic Segmentation
    Hu, Hanzhe
    Cui, Jinshi
    Wang, Liwei
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16271 - 16281
  • [2] RFNet: Region-aware Fusion Network for Incomplete Multi-modal Brain Tumor Segmentation
    Ding, Yuhang
    Yu, Xin
    Yang, Yi
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 3955 - 3964
  • [3] Deep contrastive representation learning for multi-modal clustering
    Lu, Yang
    Li, Qin
    Zhang, Xiangdong
    Gao, Quanxue
    NEUROCOMPUTING, 2024, 581
  • [4] Contrastive Multi-Modal Knowledge Graph Representation Learning
    Fang, Quan
    Zhang, Xiaowei
    Hu, Jun
    Wu, Xian
    Xu, Changsheng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (09) : 8983 - 8996
  • [5] Efficient disentangled representation learning for multi-modal finger biometrics
    Yang, Weili
    Huang, Junduan
    Luo, Dacan
    Kang, Wenxiong
    PATTERN RECOGNITION, 2024, 145
  • [6] SSDMM-VAE: variational multi-modal disentangled representation learning
    Mondal, Arnab Kumar
    Sailopal, Ajay
    Singla, Parag
    AP, Prathosh
    APPLIED INTELLIGENCE, 2023, 53 (07) : 8467 - 8481
  • [7] Multi-modal hypergraph contrastive learning for medical image segmentation
    Jing, Weipeng
    Wang, Junze
    Di, Donglin
    Li, Dandan
    Song, Yang
    Fan, Lei
    PATTERN RECOGNITION, 2025, 165
  • [8] Graph Embedding Contrastive Multi-Modal Representation Learning for Clustering
    Xia, Wei
    Wang, Tianxiu
    Gao, Quanxue
    Yang, Ming
    Gao, Xinbo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1170 - 1183
  • [9] CLMTR: a generic framework for contrastive multi-modal trajectory representation learning
    Liang, Anqi
    Yao, Bin
    Xie, Jiong
    Zheng, Wenli
    Shen, Yanyan
    Ge, Qiqi
    GEOINFORMATICA, 2024, : 233 - 253