C2MA-Net: Cross-Modal Cross-Attention Network for Acute Ischemic Stroke Lesion Segmentation Based on CT Perfusion Scans

Cited by: 28
Authors:
Shi, Tianyu [1 ]
Jiang, Huiyan [2 ,3 ]
Zheng, Bin [4 ]
Affiliations:
[1] Northeastern Univ, Software Coll, Shenyang 110819, Peoples R China
[2] Northeastern Univ, Software Coll, Shenyang 110819, Peoples R China
[3] Northeastern Univ, Minist Educ, Key Lab Intelligent Comp Biomed Image, Shenyang 110819, Peoples R China
[4] Univ Oklahoma, Sch Elect & Comp Engn, Norman, OK 73019 USA
Funding:
National Natural Science Foundation of China;
Keywords:
Artificial intelligence; Lesions; Image segmentation; Stroke (medical condition); Three-dimensional displays; Convolution; Computed tomography; Acute ischemic stroke (AIS); convolutional neural network (CNN); CT perfusion (CTP); lesion segmentation;
DOI:
10.1109/TBME.2021.3087612
Chinese Library Classification (CLC):
R318 [Biomedical Engineering];
Subject classification code:
0831;
Abstract:
Objective: Based on the hypothesis that adding a cross-modal cross-attention (C2MA) mechanism to a deep learning network improves the accuracy and efficacy of medical image segmentation, we propose to test a novel network that segments acute ischemic stroke (AIS) lesions from four CT perfusion (CTP) maps. Methods: The proposed network uses a C2MA module to directly establish a spatial-wise relationship through a multigroup non-local attention operation between two modal features, and performs dynamic group-wise recalibration through a group attention block. C2MA-Net has a multipath encoder-decoder architecture in which each modality is processed in its own stream on the encoding path, and pairs of related parameter modalities are used to bridge attention across multimodal information through the C2MA module. A public dataset involving 94 training and 62 test cases is used to build and evaluate C2MA-Net. AIS segmentation results on the test cases are analyzed and compared with other state-of-the-art models reported in the literature. Results: Averaged over several evaluation scores, C2MA-Net improves Recall and F2 scores by 6% and 1%, respectively. In the ablation experiment, the F1 score of C2MA-Net is at least 7.8% higher than that of single-input single-modal self-attention networks. Conclusion: This study demonstrates the advantages of applying C2MA-Net to segment AIS lesions: it yields promising segmentation accuracy and achieves semantic decoupling by processing the different parameter modalities separately. Significance: The results demonstrate the potential of cross-modal attention interactions to help identify new imaging biomarkers for more accurately predicting AIS prognosis in future studies.
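The abstract describes the C2MA module only at a high level: multigroup non-local attention between the feature maps of two paired CTP modalities, followed by dynamic group-wise recalibration, with the result fed back into the per-modality encoder streams. The PyTorch code below is a minimal sketch of one plausible reading of that description; the class name, group count, gating design, and residual wiring are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a cross-modal cross-attention (C2MA)-style module.
# All names and hyperparameters are assumptions made for illustration.
import torch
import torch.nn as nn


class CrossModalCrossAttention(nn.Module):
    """Multigroup non-local attention between two modality feature maps,
    followed by a simple group-wise recalibration gate (assumed design)."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        # Group attention block: one sigmoid gate per channel group (assumption).
        self.group_gate = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, groups),
            nn.Sigmoid(),
        )

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a, x_b: features of two paired CTP modality streams, shape (B, C, H, W).
        b, c, h, w = x_a.shape
        g, cg = self.groups, c // self.groups

        # Queries come from modality A, keys/values from modality B, so the
        # attention map encodes a spatial-wise A->B relationship per group.
        q = self.query(x_a).reshape(b * g, cg, h * w)
        k = self.key(x_b).reshape(b * g, cg, h * w)
        v = self.value(x_b).reshape(b * g, cg, h * w)

        attn = torch.softmax(q.transpose(1, 2) @ k / cg ** 0.5, dim=-1)  # (B*G, HW, HW)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        out = self.proj(out)

        # Dynamic group-wise recalibration: scale each channel group by its gate.
        gate = self.group_gate(out.mean(dim=(2, 3)))                     # (B, G)
        out = out.reshape(b, g, cg, h, w) * gate.reshape(b, g, 1, 1, 1)
        out = out.reshape(b, c, h, w)

        # Residual connection back into modality A's encoder stream.
        return x_a + out


# Example: bridge two hypothetical modality streams (e.g., CBF and CBV features).
c2ma = CrossModalCrossAttention(channels=64, groups=4)
fused = c2ma(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```

Because each group builds an HW x HW attention matrix, a module of this kind would normally be applied at the deeper, lower-resolution levels of the encoder-decoder rather than at full image resolution.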
Pages: 108-118
Number of pages: 11
Related papers (14 in total):
  • [1] Ischemic Stroke Lesion Core Segmentation from CT Perfusion Scans Using Attention ResUnet Deep Learning. Alirr, Omar Ibrahim. JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2025.
  • [2] UCATR: Based on CNN and Transformer Encoding and Cross-Attention Decoding for Lesion Segmentation of Acute Ischemic Stroke in Non-contrast Computed Tomography Images. Luo, Chun; Zhang, Jing; Chen, Xinglin; Tang, Yinhao; Weng, Xiechuan; Xu, Fan. 2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021: 3565-3568.
  • [3] A VAN-Based Multi-Scale Cross-Attention Mechanism for Skin Lesion Segmentation Network. Liu, Shuang; Zhuang, Zeng; Zheng, Yanfeng; Kolmanic, Simon. IEEE ACCESS, 2023, 11: 81953-81964.
  • [4] CMAF-Net: a cross-modal attention fusion-based deep neural network for incomplete multi-modal brain tumor segmentation. Sun, Kangkang; Ding, Jiangyi; Li, Qixuan; Chen, Wei; Zhang, Heng; Sun, Jiawei; Jiao, Zhuqing; Ni, Xinye. QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2024, 14 (07): 4579-4604.
  • [5] Few-shot defect segmentation based on cross-modal attention aggregation and adaptive prototype generation network. Liu, Shi-Tong; Zhang, Yun-Zhou; Shan, De-Xing; Jin, Yang; Ning, Jian. Kongzhi yu Juece/Control and Decision, 2024, 39 (11): 3655-3663.
  • [6] C2Net: content-dependent and -independent cross-attention network for anomaly detection in videos. Liang, Jiafei; Xiao, Yang; Zhou, Joey Tianyi; Yang, Feng; Li, Ting; Fang, Zhiwen. APPLIED INTELLIGENCE, 2024, 54 (02): 1980-1996.
  • [7] Cross-modal Audiovisual Separation Based on U-Net Network Combining Optical Flow Algorithm and Attention Mechanism. Lan, C.; Jiang, P.; Chen, H.; Han, C.; Guo, X. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2023, 45 (10): 3538-3546.
  • [8] Hybrid CNN-Transformer Network With Circular Feature Interaction for Acute Ischemic Stroke Lesion Segmentation on Non-Contrast CT Scans. Kuang, Hulin; Wang, Yahui; Liu, Jin; Wang, Jie; Cao, Quanliang; Hu, Bo; Qiu, Wu; Wang, Jianxin. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (06): 2303-2316.
  • [9] DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. Zou, Ziwei; Zou, Beiji; Kui, Xiaoyan; Chen, Zhi; Li, Yang. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 250.
  • [10] C3Net: Cross-Modal Feature Recalibrated, Cross-Scale Semantic Aggregated and Compact Network for Semantic Segmentation of Multi-Modal High-Resolution Aerial Images. Cao, Zhiying; Diao, Wenhui; Sun, Xian; Lyu, Xiaode; Yan, Menglong; Fu, Kun. REMOTE SENSING, 2021, 13 (03).