Patch-based generative adversarial neural network models for head and neck MR-only planning

Cited by: 72
Authors
Klages, Peter [1 ]
Benslimane, Ilyes [1 ]
Riyahi, Sadegh [1 ]
Jiang, Jue [1 ]
Hunt, Margie [1 ]
Deasy, Joseph O. [1 ]
Veeraraghavan, Harini [1 ]
Tyagi, Neelam [1 ]
Affiliations
[1] Mem Sloan Kettering Canc Ctr, Med Phys, 1275 York Ave, New York, NY 10021 USA
Keywords
conditional generative adversarial networks (cGAN); CycleGAN; generative adversarial networks (GAN); MR-Guided Radiotherapy; pix2pix; synthetic CT generation; SYNTHETIC CT; RADIOTHERAPY; DELINEATION; IMAGES;
DOI
10.1002/mp.13927
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002 ; 100207 ; 1009 ;
Abstract
Purpose To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients. Materials and methods Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per-epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast differences (in-phase vs water) were tested with three additional cases. The sCT generation accuracy was evaluated using the mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images. Results The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 +/- 7.3 and 82.3 +/- 6.4 HU, respectively.
On the independent testing set with additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 +/- 10.6 and 102.9 +/- 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 +/- 6.3 and 132.8 +/- 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared to single-view, non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in our independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions. Conclusions The dosimetric and MAE-based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our investigation of overlapping patch-based HU estimations also indicates that combining the transformation estimations of overlapping patches can reduce generation errors while providing a tool to estimate the aleatoric uncertainty of the MR-to-CT model transformation. However, because of the small patient sample sizes, further studies are required.
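The per-voxel combination of overlapping patch estimations and the MAE/ME metrics described in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the paper evaluates three different combination approaches, while this sketch shows only simple per-voxel averaging; all function names are hypothetical.

```python
import numpy as np

def combine_overlapping_patches(patches, positions, volume_shape):
    """Average overlapping patch-wise HU estimates into a single volume.

    patches   -- list of 3D arrays of per-patch HU predictions
    positions -- list of (z, y, x) corner indices for each patch
    Each voxel's value is the mean of all patch estimates covering it.
    """
    acc = np.zeros(volume_shape, dtype=np.float64)     # summed estimates
    weight = np.zeros(volume_shape, dtype=np.float64)  # overlap counts
    for patch, (z, y, x) in zip(patches, positions):
        dz, dy, dx = patch.shape
        acc[z:z + dz, y:y + dy, x:x + dx] += patch
        weight[z:z + dz, y:y + dy, x:x + dx] += 1.0
    # Avoid division by zero for voxels no patch covers.
    return acc / np.maximum(weight, 1.0)

def mae_me(sct, plan_ct, mask=None):
    """Mean absolute error and mean error in HU, optionally within a mask."""
    diff = (sct - plan_ct) if mask is None else (sct - plan_ct)[mask]
    return float(np.abs(diff).mean()), float(diff.mean())
```

The per-voxel spread of the overlapping estimates (e.g. their standard deviation, accumulated alongside the mean) is one way such a scheme could also expose the transformation-uncertainty signal the abstract alludes to.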
Pages: 626-642 (17 pages)
Related papers
50 records
  • [31] Applicability of MR-only based radiation therapy treatment planning for intracranial target volumes
    Fleckenstein, J.
    Budjan, J.
    Arns, A.
    Steil, V.
    Schoenberg, S.
    Wenz, F.
    Attenberger, U.
    Ehmann, M.
    RADIOTHERAPY AND ONCOLOGY, 2018, 127 : S150 - S151
  • [32] Feasibility of Synthetic CT Generated From Multi-Sequence MR Images Using An Adversarial Network for MR-Only Radiotherapy
    Koike, Y.
    Akino, Y.
    Sumida, I.
    Shiomi, H.
    Mizuno, H.
    Yagi, M.
    Isohashi, F.
    Seo, Y.
    Suzuki, O.
    Ogawa, K.
    MEDICAL PHYSICS, 2019, 46 (06) : E185 - E185
  • [33] Dosimetric Evaluation of Synthetic-CT Generated by Multi-Sequence MR Images for Head and Neck MR-Only Radiotherapy
    Qi, M.
    Li, Y.
    Wu, A.
    Lu, X.
    Liu, Y.
    Zhou, L.
    Song, T.
    MEDICAL PHYSICS, 2020, 47 (06) : E269 - E269
  • [34] Data augmentation for patch-based OCT chorio-retinal segmentation using generative adversarial networks
    Kugelman, Jason
    Alonso-Caneiro, David
    Read, Scott A.
    Vincent, Stephen J.
    Chen, Fred K.
    Collins, Michael J.
    NEURAL COMPUTING & APPLICATIONS, 2021, 33 (13): : 7393 - 7408
  • [36] Synthetic CT Generation From Multi-Sequence MR Images for Head and Neck MRI-Only Radiotherapy via Cycle-Consistent Generative Adversarial Network
    Peng, Y.
    Wu, S.
    Liu, Y.
    Chen, M.
    Miao, J.
    Zhao, C.
    Chen, S.
    Qi, Z.
    Deng, X.
    INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, 2021, 111 (03): : E530 - E530
  • [37] A feature invariant generative adversarial network for head and neck MRI/CT image synthesis
    Touati, Redha
    Le, William Trung
    Kadoury, Samuel
    PHYSICS IN MEDICINE AND BIOLOGY, 2021, 66 (09):
  • [38] A patch-based convolutional neural network for remote sensing image classification
    Sharma, Atharva
    Liu, Xiuwen
    Yang, Xiaojun
    Shi, Di
    NEURAL NETWORKS, 2017, 95 : 19 - 28
  • [39] M-SAN: a patch-based transferable adversarial attack using the multi-stack adversarial network
    Agrawal, Khushabu
    Bhatnagar, Charul
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [40] Dosimetric evaluation of synthetic CT image generated using a neural network for MR-only brain radiotherapy
    Tang, Bin
    Wu, Fan
    Fu, Yuchuan
    Wang, Xianliang
    Wang, Pei
    Orlandini, Lucia Clara
    Li, Jie
    Hou, Qing
    JOURNAL OF APPLIED CLINICAL MEDICAL PHYSICS, 2021, 22 (03): : 55 - 62