Information fusion for fully automated segmentation of head and neck tumors from PET and CT images

Cited: 5
Authors
Shiri, Isaac [1 ]
Amini, Mehdi [1 ]
Yousefirizi, Fereshteh [2 ]
Sadr, Alireza Vafaei [3 ,4 ]
Hajianfar, Ghasem [1 ]
Salimi, Yazdan [1 ]
Mansouri, Zahra [1 ]
Jenabi, Elnaz [5 ]
Maghsudi, Mehdi [6 ]
Mainta, Ismini [1 ]
Becker, Minerva [7 ]
Rahmim, Arman [2 ,8 ]
Zaidi, Habib [1 ,9 ,10 ,11 ,12 ]
Affiliations
[1] Geneva Univ Hosp, Div Nucl Med & Mol Imaging, Geneva, Switzerland
[2] BC Canc Res Inst, Dept Integrat Oncol, Vancouver, BC, Canada
[3] RWTH Aachen Univ Hosp, Inst Pathol, Aachen, Germany
[4] Penn State Univ, Coll Med, Dept Publ Hlth Sci, Hershey, PA USA
[5] Univ Tehran Med Sci, Shariati Hosp, Res Ctr Nucl Med, Tehran, Iran
[6] Iran Univ Med Sci, Rajaie Cardiovasc Med & Res Ctr, Tehran, Iran
[7] Geneva Univ Hosp, Serv Radiol, Geneva, Switzerland
[8] Univ British Columbia, Dept Radiol & Phys, Vancouver, BC, Canada
[9] Univ Geneva, Geneva Univ Neuroctr, Geneva, Switzerland
[10] Univ Groningen, Univ Med Ctr Groningen, Dept Nucl Med & Mol Imaging, Groningen, Netherlands
[11] Univ Southern Denmark, Dept Nucl Med, Odense, Denmark
[12] Geneva Univ Hosp, Div Nucl Med & Mol Imaging, CH-1211 Geneva, Switzerland
Funding
Natural Sciences and Engineering Research Council of Canada; Swiss National Science Foundation;
Keywords
deep learning; fusion; head and neck cancer; PET; CT; segmentation; VISIBLE IMAGES; FDG-PET; CLASSIFICATION; PERFORMANCE;
DOI
10.1002/mp.16615
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Classification Codes
1002; 100207; 1009;
Abstract
Background: PET/CT images combining anatomic and metabolic data provide complementary information that can improve clinical task performance. PET image segmentation algorithms exploiting the available multi-modal information are still lacking.
Purpose: Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs) utilizing conventional, deep learning (DL), and output-level voting-based fusions.
Methods: The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the DL fusion model baseline. Three different input-, layer-, and decision-level information fusions were used. Simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from image-level and network-level fusions), that is, output-level information fusion (voting-based fusions). The different networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% in each center), was used for final result reporting. Different standard segmentation metrics and conventional PET metrics, such as SUV, were calculated.
Results: Among single modalities, PET had a reasonable performance with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably and reached a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving superior results with a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions using SUVpeak, SUVmean, and SUVmedian.
Conclusion: PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting of several algorithms results in statistically significant improvements in the segmentation of HNC.
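For illustration only, the output-level majority-voting fusion and Dice evaluation described in the abstract can be sketched as follows. This is a minimal example under stated assumptions, not the authors' implementation; all function and variable names (majority_vote, dice_score, the *_mask arrays) are hypothetical.

```python
import numpy as np

def majority_vote(masks):
    """Voxel-wise majority voting over a list of binary segmentation masks.

    Each mask is a numpy array of identical shape with values in {0, 1}.
    A voxel is labeled foreground if more than half of the masks agree.
    """
    stacked = np.stack(masks, axis=0).astype(np.float32)
    return (stacked.mean(axis=0) > 0.5).astype(np.uint8)

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical usage: fuse predictions from a PET-only model, an image-level
# fusion model, and a network-level fusion model, then score the fused mask
# against the reference GTV delineation.
# fused = majority_vote([pet_mask, imgfus_mask, netfus_mask])
# print(dice_score(fused, gtv_mask))
```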
Pages: 319-333
Number of pages: 15
Related papers
50 records in total
  • [21] 3D FULLY CONVOLUTIONAL NETWORKS FOR CO-SEGMENTATION OF TUMORS ON PET-CT IMAGES
    Zhong, Zisha
    Kim, Yusung
    Zhou, Leixin
    Plichta, Kristin
    Allen, Bryan
    Buatti, John
    Wu, Xiaodong
    2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018), 2018: 228 - 231
  • [22] Results from the autoPET challenge on fully automated lesion segmentation in oncologic PET/CT imaging
    Gatidis, Sergios
    Frueh, Marcel
    Fabritius, Matthias P.
    Gu, Sijing
    Nikolaou, Konstantin
    La Fougere, Christian
    Ye, Jin
    He, Junjun
    Peng, Yige
    Bi, Lei
    Ma, Jun
    Wang, Bo
    Zhang, Jia
    Huang, Yukun
    Heiliger, Lars
    Marinov, Zdravko
    Stiefelhagen, Rainer
    Egger, Jan
    Kleesiek, Jens
    Sibille, Ludovic
    Xiang, Lei
    Bendazzoli, Simone
    Astaraki, Mehdi
    Ingrisch, Michael
    Cyran, Clemens C.
    Kuestner, Thomas
    NATURE MACHINE INTELLIGENCE, 2024, 6 (11) : 1396 - 1405
  • [23] Automated segmentation of acetabulum and femoral head from 3-D CT images
    Zoroofi, RA
    Sato, Y
    Sasama, T
    Nishii, T
    Sugano, N
    Yonenobu, K
    Yoshikawa, H
    Ochi, T
    Tamura, S
    IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, 2003, 7 (04): 329 - 343
  • [24] Measurement of mouse head and neck tumors by automated analysis of CBCT images
    Van Court, Benjamin
    Neupert, Brooke
    Nguyen, Diemmy
    Ross, Richard
    Knitz, Michael W.
    Karam, Sana D.
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [26] 3D Pathology Validation for Head-And-Neck Tumor Segmentation in PET/CT/MRI Images
    Lu, W.
    Li, H.
    Lewis, J.
    Thorstad, W.
    Low, D.
    Laforest, R.
    Nussenbaum, B.
    Zhu, J.
    Parikh, P.
    Wu, B.
    Biehl, K.
    MEDICAL PHYSICS, 2008, 35 (06): 2679
  • [27] SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images
    Li, Gary Y.
    Chen, Junyu
    Jang, Se-In
    Gong, Kuang
    Li, Quanzheng
    MEDICAL PHYSICS, 2024, 51 (03) : 2096 - 2107
  • [28] A Fully Automated Airway Segmentation Algorithm From Chest CT Images At Total Lung Capacity
    Nadeem, S.
    Jin, D.
    Hoffman, E. A.
    Saha, P. K.
    AMERICAN JOURNAL OF RESPIRATORY AND CRITICAL CARE MEDICINE, 2017, 195
  • [29] FUZZY ENTROPY AND MORPHOLOGY BASED FULLY AUTOMATED SEGMENTATION OF LUNGS FROM CT SCAN IMAGES
    Jaffar, M. Arfan
    Hussain, Ayyaz
    Mirza, Anwar M.
    Chaudhry, Asmatullah
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2009, 5 (12B): 4993 - 5002
  • [30] Automated deep segmentation of healthy organs in PSMA PET/CT images
    Klyuzhin, Ivan
    Chausse, Guillaume
    Bloise, Ingrid
    Ferres, Juan Lavista
    Uribe, Carlos
    Rahmim, Arman
    JOURNAL OF NUCLEAR MEDICINE, 2021, 62