Cross-Modal Consistency for Single-Modal MR Image Segmentation

Cited by: 1
Authors
Xu, Wenxuan [1 ]
Li, Cangxin [1 ]
Bian, Yun [3 ]
Meng, Qingquan [1 ]
Zhu, Weifang [1 ]
Shi, Fei [1 ]
Chen, Xinjian [1 ]
Shao, Chengwei [2 ]
Xiang, Dehui [1 ]
Affiliations
[1] Soochow Univ, Sch Elect & Informat Engn, Suzhou 215006, Peoples R China
[2] Navy Mil Med Univ, Changhai Hosp, Dept Radiol, Shanghai, Peoples R China
[3] Navy Mil Med Univ, Changhai Hosp, Dept Radiol, Shanghai 200433, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image segmentation; Pancreas; Imaging; Computed tomography; Training; Feature extraction; Loss measurement; Consistency learning; contrast alignment; single-modal MR image segmentation; PANCREAS SEGMENTATION; NETWORK;
DOI
10.1109/TBME.2024.3380058
CLC number
R318 [Biomedical Engineering];
Discipline classification code
0831;
Abstract
Objective: Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical practice. To address this issue, a cross-modal consistency framework is proposed for single-modal MR image segmentation. Methods: To enable single-modal MR image segmentation at the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network under the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, cross-modal consistency is measured with a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity between the teacher network and the auxiliary network. To reduce the contrast difference between MR images of the same organs, a contrast alignment network is proposed to align input images of varying contrast to reference images with good contrast. Results: Comprehensive experiments on a publicly available prostate dataset and an in-house pancreas dataset verify the effectiveness of the proposed method, which achieves better segmentation than state-of-the-art methods. Conclusion: The proposed segmentation method fuses dual-modal MR images in the training stage and requires only single-modal MR images at the inference stage. Significance: The proposed method can be used in routine clinical settings where only a single-modal MR image with variable contrast is available for a patient.
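As a rough illustration of the training objective described in the abstract, the following PyTorch-style sketch combines a weighted cross-entropy term, a pixel-level feature consistency term against the teacher network, and a Dice-similarity consistency term between the teacher and auxiliary predictions. The function names, the MSE form of the feature consistency, and the "1 minus soft Dice" form of the cross-modal consistency are assumptions for illustration only; the paper's exact Dice similarity entropy and Dice similarity contrastive losses are not reproduced here.

# Hypothetical sketch of the losses named in the abstract (not the authors' code).
import torch
import torch.nn.functional as F


def soft_dice(p, q, eps=1e-6):
    """Soft Dice similarity between two probability maps of shape (N, C, H, W)."""
    inter = (p * q).sum(dim=(2, 3))
    denom = p.sum(dim=(2, 3)) + q.sum(dim=(2, 3))
    return ((2 * inter + eps) / (denom + eps)).mean()


def cross_modal_consistency_loss(pred_teacher, pred_auxiliary):
    """Encourage the teacher and auxiliary networks (each seeing one modality)
    to agree, here approximated as 1 - soft Dice of their softmax predictions."""
    return 1.0 - soft_dice(pred_teacher.softmax(dim=1), pred_auxiliary.softmax(dim=1))


def single_modal_training_loss(logits_target, labels, feat_target, feat_teacher,
                               class_weights, lam_feat=1.0):
    """Weighted cross-entropy on the target network plus a pixel-level feature
    consistency term that distills guidance from the (frozen) teacher features."""
    ce = F.cross_entropy(logits_target, labels, weight=class_weights)
    feat_consistency = F.mse_loss(feat_target, feat_teacher.detach())
    return ce + lam_feat * feat_consistency

In practice the individual terms would be weighted and summed per training batch; the weighting scheme and the contrast alignment network are not specified in the abstract and are therefore omitted from this sketch.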
Pages: 2557-2567
Page count: 11
Related papers
50 records in total
  • [21] Zhang, Huiyong; Wang, Lichun; Li, Shuang; Xu, Kai; Yin, Baocai. Area-keywords cross-modal alignment for referring image segmentation. NEUROCOMPUTING, 2024, 581.
  • [22] Zhang, Wenjing; Hu, Mengnan; Tan, Quange; Zhou, Qianli; Wang, Rong. Cross-modal attention guided visual reasoning for referring image segmentation. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (19): 28853-28872.
  • [23] Xu, Mingzhu; Xiao, Tianxiang; Liu, Yutong; Tang, Haoyu; Hu, Yupeng; Nie, Liqiang. CMIRNet: Cross-Modal Interactive Reasoning Network for Referring Image Segmentation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (04): 3234-3249.
  • [24] Wexler, B. E.; King, G. P. Within-Modal and Cross-Modal Consistency in the Direction and Magnitude of Perceptual Asymmetry. NEUROPSYCHOLOGIA, 1990, 28 (01): 71-80.
  • [25] Mallea, Mario; Nanculef, Ricardo; Araya, Mauricio. Intramodal consistency in triplet-based cross-modal learning for image retrieval. MACHINE LEARNING, 2025, 114 (04).
  • [26] Li, Zhe; Zhang, Lei; Zhang, Kun; Zhang, Yongdong; Mao, Zhendong. Improving Image-Text Matching With Bidirectional Consistency of Cross-Modal Alignment. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07): 6590-6607.
  • [27] Ma, Shiqiang; Guo, Fei; Tanga, Jijun. SiamSegNet: A Multimodal Segmentation Method Based on Cross-Modal Generation for Medical Image Segmentation. DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852: 451-466.
  • [28] Wu, Zongwei; Wang, Jingjing; Zhou, Zhuyun; An, Zhaochong; Jiang, Qiuping; Demonceaux, Cedric; Sun, Guolei; Timofte, Radu. Object Segmentation by Mining Cross-Modal Semantics. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 3455-3464.
  • [29] Liu, Si; Hui, Tianrui; Huang, Shaofei; Wei, Yunchao; Li, Bo; Li, Guanbin. Cross-Modal Progressive Comprehension for Referring Segmentation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09): 4761-4775.
  • [30] Kang, Xiao; Liu, Xingbo; Xue, Wen; Zhang, Xuening; Nie, Xiushan; Yin, Yilong. Discrete online cross-modal hashing with consistency preservation. PATTERN RECOGNITION, 2024, 155.