Neural Sparse Voxel Fields

Cited by: 0
Authors
Liu, Lingjie [1 ]
Gu, Jiatao [2 ]
Lin, Kyaw Zaw [3 ]
Chua, Tat-Seng [3 ]
Theobalt, Christian [1 ]
Affiliations
[1] Max Planck Institute for Informatics, Saarbrücken, Germany
[2] Facebook AI Research, Menlo Park, CA, USA
[3] National University of Singapore, Singapore
Keywords
REPRESENTATION
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering.
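
The following is a minimal Python (NumPy) sketch, not the authors' implementation, of the rendering idea summarized in the abstract: marching along a camera ray but evaluating the scene field only at samples that fall inside a sparse set of occupied voxels, then compositing those samples with the standard emission-absorption model. The voxel size, the occupied-voxel set, and the toy_field stand-in for the paper's learned voxel-bounded MLPs are all hypothetical placeholders. Skipping samples outside the occupied set is what gives the sparse-voxel structure its speed advantage over dense ray marching.

# Minimal sketch (assumptions noted above); toy_field replaces the learned
# voxel-bounded implicit fields of NSVF with a constant-density stand-in.
import numpy as np

VOXEL_SIZE = 0.25
# Hypothetical sparse occupancy: integer coordinates of non-empty voxels.
occupied = {(0, 0, 4), (0, 0, 5), (1, 0, 5)}

def toy_field(points):
    """Stand-in for a learned implicit field: returns (density, rgb) per point."""
    density = np.full(len(points), 5.0)            # constant density inside voxels
    rgb = np.tile([0.8, 0.3, 0.2], (len(points), 1))
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, step=0.05):
    """March along one ray, querying the field only inside occupied voxels."""
    ts = np.arange(near, far, step)
    points = origin + ts[:, None] * direction
    voxel_ids = np.floor(points / VOXEL_SIZE).astype(int)
    inside = np.array([tuple(v) in occupied for v in voxel_ids])

    color = np.zeros(3)
    transmittance = 1.0
    if not inside.any():
        return color                               # ray skipped entirely: empty space
    density, rgb = toy_field(points[inside])
    alphas = 1.0 - np.exp(-density * step)         # per-sample opacity
    for a, c in zip(alphas, rgb):                  # front-to-back compositing
        color += transmittance * a * c
        transmittance *= 1.0 - a
    return color

if __name__ == "__main__":
    o = np.array([0.05, 0.05, 0.0])
    d = np.array([0.0, 0.0, 1.0])
    print("rendered rgb:", render_ray(o, d))
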
Pages: 13