An implicit neural deformable ray model for limited and sparse view-based spatiotemporal reconstruction

Times Cited: 0
Authors
He, Yuanwei [1 ]
Ruan, Dan [1 ,2 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Radiat Oncol, Los Angeles, CA 90095 USA
[2] Univ Calif Los Angeles, Dept Bioengn, Los Angeles, CA USA
Keywords
CT reconstruction; implicit neural field; motion estimation; ray tracing; COMPUTED-TOMOGRAPHY; IMAGE; INFORMATION;
DOI
10.1002/mp.17714
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline Codes
1002; 100207; 1009;
Abstract
Background: Continuous spatiotemporal volumetric reconstruction is highly valuable, especially in radiation therapy, where tracking and calculating the actual exposure during a treatment session is critical. This enables accurate analysis of treatment outcomes, including patient response and toxicity in relation to delivered doses. However, continuous 4D imaging during radiotherapy is often unavailable due to radiation-exposure concerns and hardware limitations; most setups are limited to acquiring intermittent portal projections or images between treatment beams.
Purpose: This study addresses the challenge of spatiotemporal reconstruction from limited views by reconstructing patient-specific volumes from as few as 20 input views and continuous-time dynamic volumes from only two orthogonal x-ray projections.
Methods: We introduce a novel implicit neural deformable ray (INDeR) model that uses a ray-bundle coordinate system, embedding sparse-view measurements into an implicit neural field. The method estimates real-time motion via efficient low-dimensional modulation, deforming the ray bundles based on just two orthogonal x-ray projections.
Results: The INDeR model demonstrates robust performance in image reconstruction and motion tracking, offering detailed visualization of structures such as tumors and bronchial passages. With just 20 projection views, INDeR achieves a peak signal-to-noise ratio (PSNR) of 30.13 dB, outperforming FDK, PWLS-TV, and NAF by 13.93, 4.07, and 3.16 dB, respectively. When applied in real time, the model consistently delivers a PSNR above 27.41 dB using only two orthogonal projections.
Conclusion: The proposed INDeR framework successfully reconstructs continuous spatiotemporal representations from sparse views, achieving highly accurate reconstruction with as few as 20 projections and effective real-time tracking from two orthogonal views. This approach shows great potential for anatomical monitoring in radiation therapy.
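To make the ideas in the abstract concrete, the sketch below illustrates, in PyTorch, the general pattern of an implicit attenuation field queried along rays and conditioned on a low-dimensional motion latent, together with a simple line-integral forward projection and the PSNR metric reported in the Results. This is a minimal illustrative sketch: the framework choice, the class and function names (ModulatedAttenuationField, render_projection, psnr), the network sizes, the latent dimension, and the rendering scheme are all assumptions for exposition and are not the authors' actual INDeR architecture or ray-bundle deformation model.

import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    # Standard sinusoidal encoding of ray-sample coordinates (illustrative choice).
    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs) * torch.pi)

    def forward(self, x):                       # x: (..., D)
        xf = x[..., None] * self.freqs          # (..., D, F)
        enc = torch.cat([torch.sin(xf), torch.cos(xf)], dim=-1)
        return enc.flatten(-2)                  # (..., D * 2F)

class ModulatedAttenuationField(nn.Module):
    # Implicit field: (sample coordinate, low-dimensional motion latent) -> attenuation.
    def __init__(self, coord_dim=3, latent_dim=8, hidden=256, num_freqs=6):
        super().__init__()
        self.enc = PositionalEncoding(num_freqs)
        in_dim = coord_dim * 2 * num_freqs + latent_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # non-negative attenuation
        )

    def forward(self, coords, latent):
        # coords: (N, S, 3) sample points along N rays; latent: (latent_dim,) motion state.
        z = latent.expand(*coords.shape[:-1], -1)
        return self.mlp(torch.cat([self.enc(coords), z], dim=-1)).squeeze(-1)

def render_projection(field, ray_origins, ray_dirs, latent, num_samples=64, near=0.0, far=2.0):
    # Line-integral forward projection: sum attenuation samples along each ray.
    t = torch.linspace(near, far, num_samples, device=ray_origins.device)
    pts = ray_origins[:, None, :] + t[None, :, None] * ray_dirs[:, None, :]   # (N, S, 3)
    sigma = field(pts, latent)                                                # (N, S)
    dt = (far - near) / num_samples
    return sigma.sum(dim=-1) * dt                                             # (N,)

def psnr(pred, target, data_range=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE), the metric quoted in the Results.
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(data_range ** 2 / mse)

if __name__ == "__main__":
    field = ModulatedAttenuationField()
    latent = torch.zeros(8)            # in the paper's setting, a motion state would be fit from two orthogonal views
    origins = torch.zeros(128, 3)      # toy ray bundle for demonstration only
    dirs = torch.nn.functional.normalize(torch.randn(128, 3), dim=-1)
    proj = render_projection(field, origins, dirs, latent)
    print(proj.shape, psnr(proj, torch.full_like(proj, proj.mean().item())))

In this kind of setup, reconstruction from sparse views would fit the field weights to the measured projections, while real-time tracking would freeze the field and optimize only the low-dimensional latent against the two orthogonal projections; that split is the assumed analogue of the low-dimensional modulation described in the Methods.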
Pages: 11