Patch-based generative adversarial neural network models for head and neck MR-only planning

Cited by: 72
Authors
Klages, Peter [1]
Benslimane, Ilyes [1]
Riyahi, Sadegh [1]
Jiang, Jue [1]
Hunt, Margie [1]
Deasy, Joseph O. [1]
Veeraraghavan, Harini [1]
Tyagi, Neelam [1]
Affiliations
[1] Memorial Sloan Kettering Cancer Center, Department of Medical Physics, 1275 York Ave, New York, NY 10021, USA
Keywords
conditional generative adversarial networks (cGAN); CycleGAN; generative adversarial networks (GAN); MR-Guided Radiotherapy; pix2pix; synthetic CT generation; SYNTHETIC CT; RADIOTHERAPY; DELINEATION; IMAGES;
DOI
10.1002/mp.13927
Chinese Library Classification (CLC) code
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline classification code
1002; 100207; 1009
Abstract
Purpose: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients.

Materials and methods: Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per-epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast (in-phase vs water) differences were tested with three additional cases. The sCT generation accuracy was evaluated using the mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images.

Results: The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 ± 7.3 and 82.3 ± 6.4 HU, respectively. On the independent testing set with additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 ± 10.6 and 102.9 ± 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 ± 6.3 and 132.8 ± 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared with single-view, non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in our independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions.

Conclusions: The dosimetric and MAE-based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our investigation of overlapping patch-based HU estimations also indicates that combining the transformation estimations of overlapping patches is a potential way to reduce generation errors, while also providing a tool to potentially estimate the aleatoric uncertainty of the MR-to-CT model transformation. However, because of the small patient sample sizes, further studies are required.
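The core patch-combination step described in the abstract, fusing overlapping patch-wise HU predictions into one sCT volume and scoring it against the plan CT with MAE/ME, can be illustrated with a short sketch. The Python/NumPy code below is a hypothetical illustration, not the authors' implementation: the function names, patch shapes, and toy data are invented for this example, and simple voxel-wise averaging stands in for just one of the three combination strategies the paper evaluates (the abstract does not name them).

import numpy as np


def fuse_overlapping_patches(patch_preds, patch_origins, volume_shape):
    # Average overlapping patch-wise HU predictions into one sCT volume.
    # patch_preds   : iterable of 3D arrays of predicted HU values, one per patch
    # patch_origins : matching (z, y, x) corner indices of each patch in the volume
    # volume_shape  : shape of the full sCT volume to reconstruct
    accum = np.zeros(volume_shape, dtype=np.float64)
    count = np.zeros(volume_shape, dtype=np.float64)
    for pred, (z, y, x) in zip(patch_preds, patch_origins):
        dz, dy, dx = pred.shape
        accum[z:z + dz, y:y + dy, x:x + dx] += pred
        count[z:z + dz, y:y + dy, x:x + dx] += 1.0
    # Voxels covered by several patches get the mean of all estimations;
    # np.maximum avoids division by zero for any uncovered voxels.
    return accum / np.maximum(count, 1.0)


def mae_me(sct, plan_ct, mask):
    # Mean absolute error and mean error (in HU) inside a region-of-interest mask.
    diff = sct[mask].astype(np.float64) - plan_ct[mask].astype(np.float64)
    return float(np.mean(np.abs(diff))), float(np.mean(diff))


if __name__ == "__main__":
    # Toy demonstration with random data standing in for real MR-derived patches.
    rng = np.random.default_rng(0)
    shape = (16, 64, 64)
    plan_ct = rng.normal(0.0, 300.0, size=shape)

    # Pretend each 16x32x32 "patch prediction" is the plan CT plus noise,
    # with patches overlapping by half a patch along each in-plane axis.
    preds, origins = [], []
    for y in range(0, 64 - 32 + 1, 16):
        for x in range(0, 64 - 32 + 1, 16):
            noisy = plan_ct[:, y:y + 32, x:x + 32] + rng.normal(0.0, 50.0, size=(16, 32, 32))
            preds.append(noisy)
            origins.append((0, y, x))

    sct = fuse_overlapping_patches(preds, origins, shape)
    body_mask = np.ones(shape, dtype=bool)  # whole-volume mask for this toy case
    mae, me = mae_me(sct, plan_ct, body_mask)
    print(f"whole-volume MAE = {mae:.1f} HU, ME = {me:.1f} HU")

Because each voxel accumulates several independent estimations (from overlapping patches and, in the paper, from three orthogonal views), the per-voxel spread of those estimations is also the natural quantity to inspect for the aleatoric transformation-uncertainty idea raised in the conclusions.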
Pages: 626-642
Number of pages: 17