Neural Sparse Voxel Fields

Cited by: 0
Authors
Liu, Lingjie [1]
Gu, Jiatao [2]
Lin, Kyaw Zaw [3]
Chua, Tat-Seng [3]
Theobalt, Christian [1]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Facebook AI Res, Menlo Pk, CA USA
[3] Natl Univ Singapore, Singapore, Singapore
Keywords
REPRESENTATION
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering.
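A minimal sketch (not the authors' implementation) of the voxel-skipping idea described in the abstract: sample points are generated along a camera ray and only those falling inside occupied voxels are kept for evaluation by the implicit field. The occupancy set, voxel size, and step length below are hypothetical placeholders; NSVF itself stores occupied voxels in a sparse octree and evaluates a learned voxel-bounded MLP at the surviving samples.

import numpy as np

def march_ray(origin, direction, occupied, voxel_size=0.1, t_max=5.0, step=0.02):
    """Return ray samples that lie inside occupied voxels (assumed occupancy set)."""
    ts = np.arange(0.0, t_max, step)
    points = origin[None, :] + ts[:, None] * direction[None, :]
    voxel_ids = np.floor(points / voxel_size).astype(int)
    # Skip samples whose voxel is empty; only the kept points would be passed
    # to the per-voxel implicit field, which is where the speed-up comes from.
    keep = np.array([tuple(v) in occupied for v in voxel_ids])
    return points[keep]

# Toy usage: two occupied voxels near the origin; all other samples are skipped.
occupied = {(0, 0, 0), (1, 0, 0)}
samples = march_ray(np.array([0.0, 0.05, 0.05]), np.array([1.0, 0.0, 0.0]), occupied)
print(samples.shape)  # only samples inside the occupied voxels remain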
Pages: 13
Related Papers
50 items in total
  • [41] VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs. Sun, Jiakai; Zhang, Zhanjie; Chen, Jiafu; Li, Guangyuan; Ji, Boyan; Zhao, Lei; Xing, Wei. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023: 1414-1422
  • [42] Sparse agent transformer for unified voxel and image feature extraction and fusion. Zhang, Hong; Wan, Jiaxu; He, Ziqi; Song, Jianbo; Yang, Yifan; Yuan, Ding. INFORMATION FUSION, 2024, 110
  • [43] CodedVTR: Codebook-based Sparse Voxel Transformer with Geometric Guidance. Zhao, Tianchen; Zhang, Niansong; Ning, Xuefei; Wang, He; Yi, Li; Wang, Yu. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 1425-1434
  • [44] RD-NeRF: Neural Robust Distilled Feature Fields for Sparse-View Scene Segmentation. Ma, Yongjia; Dou, Bin; Zhang, Tianyu; Yuan, Zejian. 2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024: 3470-3474
  • [45] SG-NeRF: Sparse-Input Generalized Neural Radiance Fields for Novel View Synthesis. Xu, Kuo; Li, Jie; Li, Zhen-Qiang; Cao, Yang-Jie. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2024, 39(04): 785-797
  • [46] Sparse Approximations of Fractional Matérn Fields. Roininen, Lassi; Lasanen, Sari; Orispää, Mikko; Särkkä, Simo. SCANDINAVIAN JOURNAL OF STATISTICS, 2018, 45(01): 194-216
  • [47] Camera and LiDAR Fusion for Urban Scene Reconstruction and Novel View Synthesis via Voxel-Based Neural Radiance Fields. Chen, Xuanzhu; Song, Zhenbo; Zhou, Jun; Xie, Dong; Lu, Jianfeng. REMOTE SENSING, 2023, 15(18)
  • [48] Learning of sparse auditory receptive fields. Körding, KP; König, P; Klein, DJ. PROCEEDING OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002: 1103-1108
  • [49] Infilling sparse records of spatial fields. Johns, CJ; Nychka, D; Kittel, TGF; Daly, C. JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2003, 98(464): 796-806
  • [50] Sparse motion fields for trajectory prediction. Barata, Catarina; Nascimento, Jacinto C.; Lemos, Joao M.; Marques, Jorge S. PATTERN RECOGNITION, 2021, 110