Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors

Times Cited: 1
Authors
Raut, P. [1 ,2 ,3 ]
Baldini, G. [4 ]
Schoeneck, M. [3 ]
Caldeira, L. [3 ]
Affiliations
[1] Erasmus MC, Dept Pediat Pulmonol, Rotterdam, Netherlands
[2] Erasmus MC, Dept Radiol & Nucl Med, Rotterdam, Netherlands
[3] Univ Hosp Cologne, Inst Diagnost & Intervent Radiol, Cologne, Germany
[4] Univ Hosp Essen, Inst Diagnost & Intervent Radiol & Neuroradiol, Essen, Germany
Source
FRONTIERS IN RADIOLOGY | 2024, Vol. 3
Keywords
deep learning; 3D convolutional neural network; generative adversarial network; synthetic images; multi-parametric MRI; brain tumors; segmentation; CONVOLUTIONAL NEURAL-NETWORKS; DEEP; CNN;
DOI
10.3389/fradi.2023.1336902
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Classification Codes
1002; 100207; 1009;
Abstract
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained on heterogeneous volumetric imaging data such as MRI, CT, and PET. However, DL-based methods usually require a fixed set of inputs; if one of the required inputs is missing, the method cannot be applied. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients from the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used. This dataset was used to train the Pix2PixNIfTI model to generate synthetic MRI images for each image contrast. The segmentation model, DeepMedic, was trained with five-fold cross-validation for brain tumor segmentation and tested on the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images replacing a missing input, in combination with the remaining original images, to assess how well the generated images support multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, Dice scores were significantly reduced but remained in a similar range for the whole tumor when compared with segmentation of the original images (e.g., mean Dice for predictions from synthetic T2w: NC, 0.74 +/- 0.30; ED, 0.81 +/- 0.15; CET, 0.84 +/- 0.21; WT, 0.90 +/- 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess the differences between all regions (p < 0.05). The study concludes that Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
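The abstract describes Pix2PixNIfTI as a Pix2Pix-style image-to-image translation network extended to 3D, mapping one MRI contrast to a synthetic version of another. The following is a minimal illustrative sketch of such a 3D encoder-decoder generator in PyTorch; the class name, layer widths, and patch size are assumptions for illustration and do not reproduce the authors' implementation.

# Hypothetical sketch of a 3D image-to-image generator in the spirit of Pix2Pix;
# not the authors' Pix2PixNIfTI code.
import torch
import torch.nn as nn

class UNet3DGenerator(nn.Module):
    """Minimal 3D encoder-decoder that maps one MRI contrast to another."""
    def __init__(self, in_channels=1, out_channels=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_channels, base, 4, stride=2, padding=1),
                                  nn.LeakyReLU(0.2, inplace=True))
        self.enc2 = nn.Sequential(nn.Conv3d(base, base * 2, 4, stride=2, padding=1),
                                  nn.InstanceNorm3d(base * 2),
                                  nn.LeakyReLU(0.2, inplace=True))
        self.dec1 = nn.Sequential(nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1),
                                  nn.InstanceNorm3d(base),
                                  nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(base * 2, out_channels, 4, stride=2, padding=1),
                                  nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        # Skip connection, as in U-Net-style Pix2Pix generators.
        return self.dec2(torch.cat([d1, e1], dim=1))

# Example: translate a FLAIR volume patch into a synthetic T2w patch.
generator = UNet3DGenerator()
flair_patch = torch.randn(1, 1, 64, 64, 64)   # (batch, channel, D, H, W)
synthetic_t2w = generator(flair_patch)         # same spatial shape as the input

In a full Pix2Pix setup, this generator would be trained adversarially against a 3D patch discriminator together with a voxel-wise reconstruction loss; only the generator mapping is sketched here.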
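The evaluation described above compares per-region Dice scores of segmentations obtained from original versus synthetic inputs using paired t-tests with multiple-comparison correction. A hedged sketch of that comparison is shown below; the label values, file paths, and helper names are assumptions, not taken from the paper.

# Sketch of per-region Dice scoring and a Bonferroni-corrected paired t-test,
# assuming BraTS-style label maps stored as NIfTI files.
import numpy as np
import nibabel as nib
from scipy.stats import ttest_rel

def dice(pred, ref, label):
    """Dice overlap for one tumor sub-region label."""
    p, r = (pred == label), (ref == label)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom > 0 else np.nan

regions = {"NC": 1, "ED": 2, "CET": 4}  # assumed label convention

def region_dice(pred_path, ref_path):
    pred = nib.load(pred_path).get_fdata()
    ref = nib.load(ref_path).get_fdata()
    scores = {name: dice(pred, ref, lbl) for name, lbl in regions.items()}
    # Whole tumor (WT) = union of all sub-regions.
    scores["WT"] = dice(np.where(pred > 0, 1, 0), np.where(ref > 0, 1, 0), 1)
    return scores

def compare(original_dice, synthetic_dice, n_regions=4, alpha=0.05):
    """Paired t-test per region across subjects, Bonferroni-corrected."""
    t, p = ttest_rel(original_dice, synthetic_dice, nan_policy="omit")
    return t, p, p < (alpha / n_regions)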
Pages: 12