Mutual Information-Based Graph Co-Attention Networks for Multimodal Prior-Guided Magnetic Resonance Imaging Segmentation

Cited by: 13
Authors:
Mo, Shaocong [1 ]
Cai, Ming [1 ]
Lin, Lanfen [1 ]
Tong, Ruofeng [1 ,2 ]
Chen, Qingqing [3 ]
Wang, Fang [3 ]
Hu, Hongjie [3 ]
Iwamoto, Yutaro [4 ]
Han, Xian-Hua [5 ]
Chen, Yen-Wei [1 ,2 ,4 ]
Affiliations:
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Res Ctr Healthcare Data Sci, Zhejiang Lab, Hangzhou 311121, Peoples R China
[3] Zhejiang Univ, Sir Run Run Shaw Hosp, Dept Radiol, Sch Med, Hangzhou 310016, Peoples R China
[4] Ritsumeikan Univ, Coll Informat Sci & Engn, Kusatsu 5258577, Japan
[5] Yamaguchi Univ, Artificial Intelligence Res Ctr, Yamaguchi 7538511, Japan
Funding:
National Natural Science Foundation of China;
Keywords:
Magnetic resonance imaging; Feature extraction; Lesions; Image segmentation; Liver; Fuses; Mutual information; Multimodal segmentation; graph attention; graph mutual information; MRI;
DOI:
10.1109/TCSVT.2021.3112551
Chinese Library Classification: TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline codes: 0808; 0809;
Abstract:
Multimodal magnetic resonance imaging (MRI) provides complementary information about targets, and multimodal MRI segmentation is widely used as an essential preprocessing step for initial diagnosis, stage differentiation, and post-treatment efficacy evaluation in clinical practice. For the main modality, or for each modality, it is important to enhance the visual information by modeling the connections among modalities and fusing their features effectively. However, existing multimodal segmentation methods share a drawback: they inadvertently discard modality-specific information during the fusion process. Recently, graph learning-based methods have been applied to segmentation and have achieved considerable improvements by modeling relationships across feature regions and reasoning over global information. In this paper, we propose a graph learning-based approach that efficiently extracts modality-specific features and effectively establishes regional correspondences among all modalities. Specifically, after projecting features into a graph domain and applying graph convolution to propagate information across all regions, thereby learning global modality-specific features, we propose a mutual information-based graph co-attention module that learns the weight coefficients of a bipartite graph constructed from the fully connected graphs of the different modalities and selectively fuses the node features. Based on the transformation between the spatial and graph domains and the proposed graph co-attention module, we present a multimodal prior-guided segmentation framework with two strategies for two clinical situations: a Modality-Specific Learning Strategy and a Co-Modality Learning Strategy. In addition, an improved Co-Modality Learning Strategy with trainable weights in the multi-task loss is used to optimize the proposed framework.
We validated the proposed modules and frameworks on two multimodal MRI datasets: our private liver lesion dataset and a public prostate zone dataset. The experimental results on both datasets demonstrate the superiority of the proposed approaches.
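To make the pipeline described in the abstract concrete, the following is a minimal NumPy sketch of the two core steps: graph convolution over a fully connected region graph of each modality, followed by bipartite co-attention fusion of node features across the two modality graphs. This is an illustrative assumption of the general scheme, not the authors' implementation; all function and variable names (`graph_conv`, `co_attention_fuse`, `Wc`, etc.) are hypothetical, and the mutual information-based weighting of the paper is simplified here to a plain softmax affinity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_conv(A, X, W):
    # One propagation step over a fully connected region graph:
    # symmetric normalization of A, linear projection, ReLU.
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.maximum(D_inv_sqrt @ A @ D_inv_sqrt @ X @ W, 0.0)

def co_attention_fuse(Ha, Hb, Wc):
    # Bipartite affinity between the nodes of the two modality graphs,
    # then selective (attention-weighted) fusion of modality-b node
    # features into modality a, with a residual connection.
    S = Ha @ Wc @ Hb.T              # (Na, Nb) cross-modality scores
    attn = softmax(S, axis=1)       # weights over modality-b nodes
    return Ha + attn @ Hb           # fused modality-a node features

rng = np.random.default_rng(0)
Na, Nb, d = 6, 6, 8
A = np.ones((Na, Na))               # fully connected graph with self-loops
Ha = graph_conv(A, rng.standard_normal((Na, d)), rng.standard_normal((d, d)))
Hb = graph_conv(A, rng.standard_normal((Nb, d)), rng.standard_normal((d, d)))
fused = co_attention_fuse(Ha, Hb, rng.standard_normal((d, d)))
print(fused.shape)  # (6, 8)
```

In the full framework, the projection of spatial features into the graph domain and the re-projection of `fused` back to the spatial feature map would surround these two steps.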
Pages: 2512-2526 (15 pages)