SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images

Cited by: 5
Authors
Li, Gary Y. [1 ,4 ]
Chen, Junyu [2 ,3 ]
Jang, Se-In [1 ]
Gong, Kuang [1 ]
Li, Quanzheng [1 ]
Affiliations
[1] Harvard Med Sch, Ctr Adv Med Comp & Anal, Massachusetts Gen Hosp, Boston, MA USA
[2] Johns Hopkins Univ, Russell H Morgan Dept Radiol & Radiol Sci, Sch Med, Baltimore, MD USA
[3] Johns Hopkins Univ, Whiting Sch Engn, Dept Elect & Comp Engn, Baltimore, MD USA
[4] 100 Cambridge St, Boston, MA 02114 USA
Keywords
network architecture; PET/CT; Transformer; tumor segmentation
DOI
10.1002/mp.16703
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Background: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNNs) have become the de facto standard for automated image segmentation. However, because enlarging the field of view in DCNNs is computationally expensive, their ability to model long-range dependencies remains limited, which can lead to sub-optimal segmentation of objects whose background context spans long distances. Transformer models, on the other hand, have demonstrated excellent capability at capturing such long-range information in several semantic segmentation tasks on medical images.
Purpose: Despite the impressive representation capacity of vision transformer models, current vision transformer-based segmentation models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of their self-attention mechanism is limited in extracting the complementary information that exists in multi-modal data. To this end, we propose a novel segmentation model, dubbed Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module that incorporates cross-modal feature extraction at multiple resolutions.
Methods: We propose a novel architecture for cross-modal 3D semantic segmentation with two main components: (1) a cross-modal 3D Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted-window attention block for learning complementary information from the modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images.
Results: Empirical evidence demonstrates that our proposed method consistently outperforms the comparative techniques. This success can be attributed to the CMA module's capacity to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules.
Conclusions: We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. Our model incorporates a cross-modal attention module that enables the exchange of features between modalities at multiple resolutions. The experimental results establish the superiority of our method in capturing improved inter-modality correlations between PET and CT for head-and-neck tumor segmentation. Furthermore, the proposed methodology is applicable to other semantic segmentation tasks involving different imaging modality pairs, such as SPECT/CT or PET/MRI. Code:
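The Methods section above describes a cross-modal attention block in which PET and CT features exchange complementary information at multiple resolutions. As an illustration only, the following is a minimal PyTorch sketch of that cross-modal attention idea: queries from one modality attend to keys/values from the other, with residual updates to both streams. It is not the authors' released implementation; the class name CrossModalAttention, the use of plain multi-head attention over flattened 3D tokens, and the omission of the shifted-window partitioning and multi-resolution placement are simplifying assumptions made for this sketch.

```python
# Minimal sketch of a cross-modal attention (CMA) block, assuming plain
# multi-head cross-attention over flattened 3D feature tokens. This is a
# conceptual illustration, not the SwinCross implementation: it omits the
# 3D shifted-window partitioning and the per-stage multi-resolution design.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Exchange information between PET and CT feature streams.

    Queries come from one modality and keys/values from the other, so each
    modality's features are refined by complementary context from its
    counterpart.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.pet_from_ct = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ct_from_pet = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_pet = nn.LayerNorm(dim)
        self.norm_ct = nn.LayerNorm(dim)

    def forward(self, pet_tokens: torch.Tensor, ct_tokens: torch.Tensor):
        # pet_tokens, ct_tokens: (batch, num_voxels, dim) flattened 3D features
        pet_q = self.norm_pet(pet_tokens)
        ct_q = self.norm_ct(ct_tokens)
        # PET queries attend to CT keys/values, and vice versa (residual update)
        pet_out, _ = self.pet_from_ct(pet_q, ct_q, ct_q)
        ct_out, _ = self.ct_from_pet(ct_q, pet_q, pet_q)
        return pet_tokens + pet_out, ct_tokens + ct_out


if __name__ == "__main__":
    # Toy example: an 8x8x8 feature volume per modality with 96 channels.
    b, d, h, w, c = 1, 8, 8, 8, 96
    pet = torch.randn(b, d * h * w, c)
    ct = torch.randn(b, d * h * w, c)
    block = CrossModalAttention(dim=c, num_heads=4)
    pet_fused, ct_fused = block(pet, ct)
    print(pet_fused.shape, ct_fused.shape)  # torch.Size([1, 512, 96]) each
```

In a full shifted-window design, such a block would operate within local 3D windows at each encoder stage rather than over the entire flattened volume, which keeps the attention cost manageable for volumetric PET/CT inputs.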
Pages: 2096-2107
Number of pages: 12
Related papers
50 items in total
  • [1] Zhou T., Dang P., Lu H., Hou S., Peng C., Shi H. A Transformer Segmentation Model for PET/CT Images with Cross-modal, Cross-scale and Cross-dimensional. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2023, 45(10): 3529-3537
  • [2] Lu W., Li H., Lewis J., Thorstad W., Low D., Laforest R., Nussenbaum B., Zhu J., Parikh P., Wu B., Biehl K. 3D Pathology Validation for Head-And-Neck Tumor Segmentation in PET/CT/MRI Images. Medical Physics, 2008, 35(6): 2679+
  • [3] Mahdi M. A., Ahamad S., Saad S. A., Dafhalla A., Qureshi R., Alqushaibi A. Weighted Fusion Transformer for Dual PET/CT Head and Neck Tumor Segmentation. IEEE Access, 2024, 12: 110905-110919
  • [4] Costea M., Biston M., Gregoire V., Sarrut D. Comparison of segmentation algorithms for organs at risk delineation on head-and-neck CT images. Radiotherapy and Oncology, 2021, 161: S704-S705
  • [5] Oreiller V., Andrearczyk V., Jreige M., Boughdad S., Elhalawani H., Castelli J., Vallieres M., Zhu S., Xie J., Peng Y., Iantsen A., Hatt M., Yuan Y., Ma J., Yang X., Rao C., Pai S., Ghimire K., Feng X., Naser M. A., Fuller C. D., Yousefirizi F., Rahmim A., Chen H., Wang L., Prior J. O., Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Medical Image Analysis, 2022, 77
  • [6] Zou Z., Zou B., Kui X., Chen Z., Li Y. DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. Computer Methods and Programs in Biomedicine, 2024, 250
  • [7] Costea M., Zlate A., Serre A.-A., Racadot S., Baudier T., Chabaud S., Gregoire V., Sarrut D., Biston M.-C. Evaluation of different algorithms for automatic segmentation of head-and-neck lymph nodes on CT images. Radiotherapy and Oncology, 2023, 188
  • [8] Wang J., Peng Y., Guo Y. DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT. Physics in Medicine and Biology, 2023, 68(11)
  • [9] Piao Z., Gu Y. H., Yoo S. J., Seong M. Segmentation of Cerebral Hemorrhage CT Images using Swin Transformer and HarDNet. 2023 International Conference on Information Networking (ICOIN), 2023: 522-525
  • [10] Zhang W., Tan Q., Li P., Zhang Q., Wang R. Cross-modal transformer with language query for referring image segmentation. Neurocomputing, 2023, 536: 191-205