An implicit neural deformable ray model for limited and sparse view-based spatiotemporal reconstruction

Cited by: 0
Authors
He, Yuanwei [1]
Ruan, Dan [1,2]
Institutions
[1] Univ Calif Los Angeles, Dept Radiat Oncol, Los Angeles, CA 90095 USA
[2] Univ Calif Los Angeles, Dept Bioengn, Los Angeles, CA USA
Keywords
CT reconstruction; implicit neural field; motion estimation; ray tracing; computed tomography; image; information
DOI
10.1002/mp.17714
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Code
1002; 100207; 1009
Abstract
Background: Continuous spatiotemporal volumetric reconstruction is highly valuable, especially in radiation therapy, where tracking and calculating the actual exposure delivered during a treatment session is critical. This enables accurate analysis of treatment outcomes, including patient response and toxicity in relation to delivered doses. However, continuous 4D imaging during radiotherapy is often unavailable due to radiation exposure concerns and hardware limitations; most setups are limited to acquiring intermittent portal projections or images between treatment beams.
Purpose: This study addresses the challenge of spatiotemporal reconstruction from limited views by reconstructing patient-specific volumes from as few as 20 input views, and continuous-time dynamic volumes from only two orthogonal x-ray projections.
Methods: We introduce a novel implicit neural deformable ray (INDeR) model that uses a ray bundle coordinate system to embed sparse-view measurements into an implicit neural field. The method estimates real-time motion via efficient low-dimensional modulation, which deforms the ray bundles based on just two orthogonal x-ray projections.
Results: The INDeR model demonstrates robust performance in image reconstruction and motion tracking, offering detailed visualization of structures such as tumors and bronchial passages. With just 20 projection views, INDeR achieves a peak signal-to-noise ratio (PSNR) of 30.13 dB, outperforming FDK, PWLS-TV, and NAF by 13.93, 4.07, and 3.16 dB, respectively. In real-time operation, the model consistently delivers a PSNR above 27.41 dB using only two orthogonal projections.
Conclusion: The proposed INDeR framework reconstructs continuous spatiotemporal representations from sparse views, achieving highly accurate reconstruction with as few as 20 projections and effective real-time tracking from two orthogonal views. This approach shows great potential for anatomical monitoring in radiation therapy.
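The abstract describes the method only at a high level. As a rough illustration of the two ingredients it names, an implicit neural field queried at spatial/ray coordinates and a low-dimensional motion code that modulates a coordinate deformation, the following PyTorch sketch is offered. It is a hypothetical toy, not the authors' INDeR implementation: the FourierFeatures encoder, the DeformableIntensityField class, the latent dimension, and all layer sizes are illustrative assumptions.

```python
# Minimal, illustrative sketch (NOT the authors' INDeR implementation) of an
# implicit neural field whose query coordinates are deformed by a
# low-dimensional motion code. All names and sizes are assumptions.
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Random Fourier positional encoding for 3-D coordinates."""

    def __init__(self, in_dim=3, num_freqs=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, num_freqs) * scale)

    def forward(self, x):                        # x: (N, 3)
        proj = 2.0 * torch.pi * x @ self.B       # (N, num_freqs)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)


class DeformableIntensityField(nn.Module):
    """Static attenuation field plus a motion-modulated coordinate offset."""

    def __init__(self, latent_dim=8, num_freqs=64, hidden=128):
        super().__init__()
        self.encode = FourierFeatures(num_freqs=num_freqs)
        feat_dim = 2 * num_freqs
        # Deformation head: (encoded point, motion code) -> 3-D offset.
        self.deform = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Static field: encoded (deformed) point -> attenuation value.
        self.field = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts, z):                   # pts: (N, 3), z: (latent_dim,)
        z_rep = z.expand(pts.shape[0], -1)       # broadcast code to each point
        offset = self.deform(torch.cat([self.encode(pts), z_rep], dim=-1))
        return self.field(self.encode(pts + offset))    # (N, 1) attenuation


# Toy usage: approximate one projection value as the line integral of the
# field along a ray, by sampling points and summing (stand-in forward model).
model = DeformableIntensityField()
z_t = torch.zeros(8, requires_grad=True)         # per-time-point motion code
t_vals = torch.linspace(0.0, 1.0, steps=128).unsqueeze(-1)
origin = torch.tensor([0.0, 0.0, -1.0])
direction = torch.tensor([0.0, 0.0, 1.0])
samples = origin + t_vals * direction            # (128, 3) points along the ray
line_integral = model(samples, z_t).sum() / 128  # crude quadrature along ray
print(float(line_integral))
```

In such a setup, the static field would be fitted once to the sparse projection views, after which only the low-dimensional code z_t would be re-estimated per time point from two orthogonal projections; this loosely mirrors the low-dimensional modulation idea described in the abstract, without claiming to reproduce the paper's ray-bundle formulation.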
Pages: 11
Related Papers (50 total)
[1] He, Y.; Liu, H.; Ruan, D. Implicit Neural Deformable Ray with Instance Query (INDRIK) for Real-Time Sparse-View-Driven Spatiotemporal CBCT Reconstruction. MEDICAL PHYSICS, 2024, 51(9): 6553.
[2] Li, Moran; Huang, Haibin; Zheng, Yi; Li, Mengtian; Sang, Nong; Ma, Chongyang. Implicit Neural Deformation for Sparse-View Face Reconstruction. COMPUTER GRAPHICS FORUM, 2022, 41(7): 601-610.
[3] Chen, Qiang; Xiao, Guoqiang. Sparse-View CT Reconstruction via Implicit Neural Intensity Functions. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT II, KSEM 2023, 2023, 14118: 153-161.
[4] Lefevre, Stephanie; Odobez, Jean-Marc. View-Based Appearance Model Online Learning for 3D Deformable Face Tracking. VISAPP 2010: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 1, 2010: 223-230.
[5] Xie, Yuan; Zhang, Wensheng; Qu, Yanyun; Zhang, Yinghua. Discriminative subspace learning with sparse representation view-based model for robust visual tracking. PATTERN RECOGNITION, 2014, 47(3): 1383-1394.
[6] Eustice, Ryan M.; Singh, Hanumant; Leonard, John J. Exactly sparse delayed-state filters for view-based SLAM. IEEE TRANSACTIONS ON ROBOTICS, 2006, 22(6): 1100-1114.
[7] Srivastava, Atul; Patel, H. S.; Gupta, P. K. A view-based approach for the reconstruction of optical properties of turbid media. CURRENT SCIENCE, 2007, 93(3): 359-365.
[8] Lv, Liangliang; Li, Chang; Wei, Wenjing; Sun, Shuyi; Ren, Xiaoxuan; Pan, Xiaodong; Li, Gongping. Optimization of sparse-view CT reconstruction based on convolutional neural network. MEDICAL PHYSICS, 2025.
[9] Komarichev, Artem; Hua, Jing; Zhong, Zichun. DiffSVR: Differentiable Neural Implicit Surface Rendering for Single-View Reconstruction with Highly Sparse Depth Prior. COMPUTER-AIDED DESIGN, 2023, 164.
[10] Yang, Jiaying; Xie, Shipeng. PE-INeR: prior-embedded implicit neural representation for sparse-view CBCT reconstruction. APPLIED OPTICS, 2024, 63(35): 8907-8916.