Deep Learning-based 3D Image Generation Using a Single 2D Projection Image

Citations: 0
Authors
Lei, Yang
Tian, Zhen
Wang, Tonghe
Roper, Justin
Higgins, Kristin
Bradley, Jeffrey D.
Curran, Walter J.
Liu, Tian
Yang, Xiaofeng [1 ]
Affiliations
[1] Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA
Source
Funding
US National Institutes of Health
Keywords
Lung stereotactic body radiation therapy; volumetric imaging; conditional generative adversarial network; TUMOR-LOCALIZATION; REAL-TIME; RECONSTRUCTION; MOTION;
DOI
10.1117/12.2580796
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Due to the inter-fraction and intra-fraction variation of respiratory motion, it is highly desirable to provide real-time volumetric images during treatment delivery of lung stereotactic body radiation therapy (SBRT) for accurate and active motion management. Motivated by this need, in this study we propose a novel generative adversarial network integrated with perceptual supervision to derive an instantaneous 3D image from a single 2D kV projection. Our proposed network, named TransNet, consists of three modules, i.e., encoding, transformation, and decoding modules. Rather than only using an image distance loss between the generated 3D image and the ground truth 3D CT image to supervise the network, a perceptual loss in feature space is integrated into the loss function to force TransNet to yield accurate lung boundaries. An adversarial loss is also used to improve the realism of the generated 3D image. We conducted a simulation study on 20 patient cases, who had undergone 4D-CT scans and received lung SBRT treatment at our institution, and evaluated the efficacy and consistency of our method at four different projection angles, i.e., 0 degrees, 30 degrees, 60 degrees and 90 degrees. For each 3D CT image of a breathing phase in the 4D-CT image set, we simulated its 2D projections at these four angles. Then, for each projection angle, a patient's 3D CT images of 9 phases and the corresponding 2D projection data were used for training, with the remaining phase used for testing. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) achieved by our method are 99.5 +/- 13.7 HU, 23.4 +/- 2.3 dB and 0.949 +/- 0.010, respectively. These results demonstrate the feasibility and efficacy of our method for generating a 3D image from a single 2D projection, which provides a potential solution for in-treatment real-time on-board volumetric imaging to guide treatment delivery and ensure the effectiveness of lung SBRT treatment.
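The image-quality metrics quoted in the abstract are standard; the sketch below is a generic NumPy illustration of MAE and PSNR (not the authors' evaluation code), assuming HU-valued volumes and an assumed 2000 HU dynamic range for PSNR:

```python
import numpy as np

def mae(pred, truth):
    """Mean absolute error, in the images' native units (HU for CT)."""
    return float(np.mean(np.abs(pred - truth)))

def psnr(pred, truth, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = float(np.mean((pred - truth) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy check: a synthetic "volume" whose prediction is off by a constant 100 HU.
rng = np.random.default_rng(0)
truth = rng.uniform(-1000.0, 1000.0, size=(8, 8, 8))
pred = truth + 100.0
print(mae(pred, truth))                      # ~100 HU
print(psnr(pred, truth, data_range=2000.0))  # ~26 dB
```

SSIM is windowed and considerably more involved; in practice it is usually computed with a library routine such as `skimage.metrics.structural_similarity` rather than by hand.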
Pages: 6
Related Papers (50 records)
  • [1] Generation of a 3D proximal femur shape from a single projection 2D radiographic image
    Langton, C. M.
    Pisharody, S.
    Keyak, J. H.
    OSTEOPOROSIS INTERNATIONAL, 2009, 20 (03) : 455 - 461
  • [3] An Automatic 3D Scene Generation Pipeline Based on a Single 2D Image
    Cannavo, Alberto
    Bardella, Christian
    Semeraro, Lorenzo
    De Lorenzis, Federico
    Zhang, Congyi
    Jiang, Ying
    Lamberti, Fabrizio
    AUGMENTED REALITY, VIRTUAL REALITY, AND COMPUTER GRAPHICS, 2021, 12980 : 109 - 117
  • [4] Arbitrary image reinflation: A deep learning technique for recovering 3D photoproduct distributions from a single 2D projection
    Sparling, Chris
    Ruget, Alice
    Leach, Jonathan
    Townsend, Dave
    REVIEW OF SCIENTIFIC INSTRUMENTS, 2022, 93 (02):
  • [5] Image Projection Network: 3D to 2D Image Segmentation in OCTA Images
    Li, Mingchao
    Chen, Yerui
    Ji, Zexuan
    Xie, Keren
    Yuan, Songtao
    Chen, Qiang
    Li, Shuo
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (11) : 3343 - 3354
  • [6] Learning Adversarial 3D Model Generation with 2D Image Enhancer
    Zhu, Jing
    Xie, Jin
    Fang, Yi
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 7615 - 7622
  • [7] Deep-learning based 3D birefringence image generation using 2D multi-view holographic images
    Kim, Hakdong
    Jun, Taeheul
    Lee, Hyoung
    Chae, Byung Gyu
    Yoon, Minsung
    Kim, Cheongwon
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [8] Generation of 3D hair model from 2D image using image processing
    Kong, WM
    Takahashi, H
    Nakajima, M
    APPLICATIONS OF DIGITAL IMAGE PROCESSING XIX, 1996, 2847 : 303 - 311
  • [9] 2D/3D image registration using regression learning
    Chou, Chen-Rui
    Frederick, Brandon
    Mageras, Gig
    Chang, Sha
    Pizer, Stephen
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2013, 117 (09) : 1095 - 1106
  • [10] Generation of 3D image sequences from mixed 2D and 3D image sources
    Börcsök, J
    WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL XVII, PROCEEDINGS: CYBERNETICS AND INFORMATICS: CONCEPTS AND APPLICATIONS (PT II), 2001, : 386 - 388