Fully Convolutional Transformer-Based GAN for Cross-Modality CT to PET Image Synthesis

Times Cited: 1
Authors
Li, Yuemei [1 ]
Zheng, Qiang [1 ]
Wang, Yi [1 ]
Zhou, Yongkang [2 ]
Zhang, Yang [2 ]
Song, Yipeng [4 ]
Jiang, Wei [3 ,4 ,5 ,6 ]
Affiliations
[1] Yantai Univ, Sch Comp & Control Engn, Yantai 264205, Peoples R China
[2] Zhongshan Hosp, Dept Radiat Oncol, Shanghai 200032, Peoples R China
[3] Tianjin Univ, Sch Precis Instrument & Optoelect Engn, Tianjin 300072, Peoples R China
[4] Yantai Yuhuangding Hosp, Dept Radiotherapy, Yantai 264000, Peoples R China
[5] Tianjin Univ, Acad Med Engn & Translat Med, Sch Precis Instrument & Optoelect Engn, Dept Biomed Engn, Tianjin 300072, Peoples R China
[6] Qingdao Univ, Yantai Yuhuangding Hosp, Dept Radiotherapy, 20 Yuhuangding East Rd, Qingdao 264000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; GAN; CT; PET; Image synthesis;
DOI
10.1007/978-3-031-45087-7_11
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Positron emission tomography (PET) imaging is widely used for staging and monitoring the treatment of lung cancer, but the high cost of PET imaging equipment and the numerous contraindications to the examination pose significant challenges to individuals and institutions seeking PET scans. Cross-modality image synthesis could alleviate this problem, but existing methods still have shortcomings: pix2pix-style models impose stringent requirements on paired data, while cycleGAN-style models, although they relax this requirement, do not yield a unique optimal solution. In addition, models with convolutional neural network backbones remain limited when handling medical images in which healthy and pathological tissues are contextually related. In this paper, we propose a generative adversarial network (GAN) based on a fully convolutional transformer and residual blocks, called C2P-GAN, for cross-modality synthesis of PET images from CT images. It consists of a generator and a discriminator that compete with each other, together with a registration network that suppresses noise interference. The generator integrates convolutional networks, which excel at capturing local image features, with a transformer, which is sensitive to global contextual information. On the currently collected dataset of 23 image pairs from lung cancer patients, quantitative and qualitative experimental results demonstrate the superiority of the proposed method over competing methods and its great potential for clinical application.
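The record does not include the paper's actual layer configuration, so the following PyTorch sketch is only a rough illustration of the idea described in the abstract: a CT-to-PET generator whose convolutional encoder and decoder capture local detail while a transformer bottleneck models global context. All module names, channel widths, patch/layer counts, and the training setup are assumptions, not the authors' C2P-GAN implementation.

```python
# Minimal sketch, assuming a conv encoder -> transformer bottleneck -> conv decoder layout.
# Nothing below is taken from the paper; it only illustrates mixing local convolutional
# features with global self-attention for CT-to-PET translation.
import torch
import torch.nn as nn

class ConvTransformerGenerator(nn.Module):
    """Hypothetical CT-to-PET generator combining convolutions and a transformer."""
    def __init__(self, in_ch=1, out_ch=1, base_ch=64, num_layers=4, num_heads=8):
        super().__init__()
        # Convolutional encoder: captures local features, downsamples by 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer bottleneck: models global context across all spatial positions.
        layer = nn.TransformerEncoderLayer(d_model=base_ch * 2, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Convolutional decoder: upsamples back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, ct):
        feat = self.encoder(ct)                    # (B, C, H/4, W/4)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (B, H*W/16, C) token sequence
        tokens = self.transformer(tokens)          # global self-attention over tokens
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feat)                  # synthetic PET image in [-1, 1]

if __name__ == "__main__":
    gen = ConvTransformerGenerator()
    fake_pet = gen(torch.randn(1, 1, 128, 128))    # one single-channel CT slice
    print(fake_pet.shape)                          # torch.Size([1, 1, 128, 128])
```

In the full method, such a generator would be trained adversarially against a discriminator, with the registration network mentioned in the abstract compensating for misalignment between the CT and PET pairs; those components are omitted here.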
Pages: 101-109
Page count: 9