DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT

Times Cited: 8
Authors
Wang, Jiao [1 ]
Peng, Yanjun [1 ]
Guo, Yanfei [2 ]
Affiliations
[1] Shandong Univ Sci & Technol, Coll Comp Sci & Engn, Qingdao 266590, Shandong, Peoples R China
[2] Qufu Normal Univ, Coll Comp Sci & Engn, Rizhao 276827, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023, Vol. 68, Issue 11
Funding
National Natural Science Foundation of China
Keywords
head and neck; segmentation; convolution transformer block; squeeze and excitation pool; multi-attention fusion; auxiliary paths;
DOI
10.1088/1361-6560/acd29f
CLC Number (Chinese Library Classification)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Objective. Accurate segmentation of head and neck (H&N) tumors is critical in radiotherapy. However, existing methods lack effective strategies for integrating local and global information, strong semantic and contextual information, and spatial and channel features, all of which are useful cues for improving tumor segmentation accuracy. In this paper, we propose a novel method, the dual modules convolution transformer network (DMCT-Net), for H&N tumor segmentation in fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) images.
Approach. The DMCT-Net consists of the convolution transformer block (CTB), the squeeze and excitation (SE) pool module, and the multi-attention fusion (MAF) module. First, the CTB is designed to capture long-range dependencies and local multi-scale receptive field information by combining standard convolution, dilated convolution, and a transformer operation. Second, to extract feature information from different perspectives, we construct the SE pool module, which not only extracts strong semantic features and context features simultaneously but also uses SE normalization to adaptively fuse features and adjust the feature distribution. Third, the MAF module is proposed to combine global context information, channel information, and voxel-wise local spatial information. In addition, we adopt up-sampling auxiliary paths to supplement multi-scale information.
Main results. The experimental results show that the method achieves better or competitive segmentation performance compared with several advanced methods on three datasets. The best segmentation metric scores are as follows: DSC of 0.781, HD95 of 3.044, precision of 0.798, and sensitivity of 0.857. Comparative experiments with bimodal and single-modal inputs indicate that bimodal input provides richer and more effective information for improving tumor segmentation performance. Ablation experiments verify the effectiveness and contribution of each module.
Significance. We propose a new network for 3D H&N tumor segmentation in FDG-PET/CT images, which achieves high accuracy.
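To illustrate how a convolution transformer block of the kind described in the abstract can combine local multi-scale convolution with global self-attention, the PyTorch sketch below fuses a standard 3D convolution, a dilated 3D convolution, and multi-head self-attention over flattened voxels. This is a minimal sketch under assumed design choices: the class name ConvTransformerBlock3D, channel sizes, and the concatenation-based fusion are hypothetical and do not reproduce the authors' CTB implementation.

import torch
import torch.nn as nn

# Minimal sketch (assumption, not the authors' code): a 3D convolution-transformer
# block in the spirit of the CTB described above.
class ConvTransformerBlock3D(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4, dilation: int = 2):
        super().__init__()
        # Standard convolution: local receptive field.
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Dilated convolution: wider receptive field at the same parameter cost.
        self.dilated_conv = nn.Conv3d(channels, channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        # Transformer operation: long-range dependencies over flattened voxels.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # 1x1x1 convolution fuses the three branches back to 'channels' feature maps.
        self.proj = nn.Conv3d(3 * channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        local = self.act(self.conv(x))
        multi_scale = self.act(self.dilated_conv(x))
        # Flatten voxels into a token sequence for self-attention: (B, D*H*W, C).
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, d, h, w)
        # Concatenate the local, multi-scale, and global branches and fuse.
        return self.proj(torch.cat([local, multi_scale, global_feat], dim=1))

if __name__ == "__main__":
    # Toy feature map, e.g. from a two-channel (PET + CT) stem: 16 channels, 16^3 patch.
    block = ConvTransformerBlock3D(channels=16)
    print(block(torch.randn(1, 16, 16, 16, 16)).shape)  # torch.Size([1, 16, 16, 16, 16])

In a full network, several such blocks would typically be stacked in the encoder of a U-shaped architecture fed with the two-channel PET/CT volume; because attention here runs over every voxel of the patch, the patch size must stay modest to keep memory manageable.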
Pages: 24
Related Papers
50 records in total
  • [21] DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images
    Zou, Ziwei
    Zou, Beiji
    Kui, Xiaoyan
    Chen, Zhi
    Li, Yang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 250
  • [22] Dual-branch multi-information aggregation network with transformer and convolution for polyp segmentation
    Zhang, Wenyu
    Lu, Fuxiang
    Su, Hongjing
    Hu, Yawen
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 168
  • [23] TC-Net: Dual coding network of Transformer and CNN for skin lesion segmentation
    Dong, Yuying
    Wang, Liejun
    Li, Yongming
    PLOS ONE, 2022, 17 (11)
  • [24] Fully Automated Head and Neck Malignant Lesions Segmentation using Multimodality PET/CT imaging and A Deep Convolutional Network
    Shiri, Isaac
    Amini, Mehdi
    Arabi, Hossein
    Zaidi, Habib
    JOURNAL OF NUCLEAR MEDICINE, 2021, 62
  • [25] Dual attention U-net for liver tumor segmentation in CT images
    Alirr, Omar Ibrahim
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2024, 19 (02)
  • [26] Automatic Segmentation of Head and Neck Tumors and Nodal Metastases in PET-CT scans
    Andrearczyk, Vincent
    Oreiller, Valentin
    Vallieres, Martin
    Castelli, Joel
    Elhalawani, Hesham
    Jreige, Mario
    Boughdad, Sarah
    Prior, John O.
    Depeursinge, Adrien
    MEDICAL IMAGING WITH DEEP LEARNING, 2020, 121: 33-43
  • [27] Technology insight: PET and PET/CT in head and neck tumor staging and radiation therapy planning
    Frank, SJ
    Chao, KSC
    Schwartz, DL
    Weber, RS
    Apisarnthanarax, S
    Macapinlac, HA
    NATURE CLINICAL PRACTICE ONCOLOGY, 2005, 2 (10): 526-533
  • [29] Comparison of automatic tumour segmentation approaches for head and neck cancers in PET/CT images
    Groendahl, A. Rosvoll
    Mulstad, M.
    Moe, Y. Mardal
    Knudtsen, I. Skjei
    Torheim, T.
    Tomic, O.
    Indahl, U. G.
    Malinen, E.
    Dale, E.
    Futsaether, C. M.
    RADIOTHERAPY AND ONCOLOGY, 2019, 133: S557
  • [30] TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images
    Liang, Junjie
    Yang, Cihui
    Zeng, Mengjie
    Wang, Xixi
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2022, 12 (04) : 2397 - 2415