ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields

Cited by: 14
Authors
Somraj, Nagabhushan [1 ]
Soundararajan, Rajiv [1 ]
Affiliations
[1] Indian Inst Sci, Bengaluru, India
Keywords
neural rendering; novel view synthesis; sparse input NeRF; visibility prior; plane sweep volumes;
DOI
10.1145/3588432.3591539
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural radiance fields (NeRF) have achieved impressive performance in view synthesis by encoding neural representations of a scene. However, NeRFs require hundreds of images per scene to synthesize photo-realistic novel views. Training them on sparse input views leads to overfitting and incorrect scene depth estimation, resulting in artifacts in the rendered novel views. Sparse input NeRFs were recently regularized by providing dense depth estimated from pre-trained networks as supervision, to achieve improved performance over sparse depth constraints. However, we find that such depth priors may be inaccurate due to generalization issues. Instead, we hypothesize that the visibility of pixels in different input views can be more reliably estimated to provide dense supervision. In this regard, we compute a visibility prior through the use of plane sweep volumes, which does not require any pre-training. By regularizing the NeRF training with the visibility prior, we successfully train the NeRF with few input views. We also reformulate the NeRF to directly output the visibility of a 3D point from a given viewpoint, which reduces the training time with the visibility constraint. On multiple datasets, our model outperforms the competing sparse input NeRF models, including those that use learned priors. The source code for our model can be found on our project page: https://nagabhushansn95.github.io/publications/2023/ViP-NeRF.html.
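The sweep-and-threshold idea behind the visibility prior can be illustrated with a minimal NumPy toy. This is not the authors' implementation: the function names, the error threshold `tau`, and the use of circular horizontal shifts in place of full plane-induced homography warps are all illustrative assumptions.

```python
import numpy as np

def plane_sweep_errors(ref, src, disparities):
    """Build a toy plane sweep volume for two rectified grayscale views:
    warp `src` toward `ref` at each candidate disparity (here a circular
    horizontal shift stands in for the full homography warp) and record
    the per-pixel photometric error. Returns an array of shape (D, H, W)."""
    return np.stack([np.abs(ref - np.roll(src, d, axis=1))
                     for d in disparities])

def visibility_prior(errors, tau=0.1):
    """A reference-view pixel is marked visible in the source view when
    its best-matching depth plane has a low photometric error."""
    return (errors.min(axis=0) < tau).astype(np.float32)

# Toy example: `src` is `ref` shifted left by 2 pixels, so the plane at
# disparity 2 matches perfectly and every pixel should be marked visible.
rng = np.random.default_rng(0)
ref = rng.random((8, 16))
src = np.roll(ref, -2, axis=1)
errors = plane_sweep_errors(ref, src, disparities=[0, 1, 2, 3])
visibility = visibility_prior(errors)
```

In the actual method, each candidate depth plane induces a homography between the input views, and the resulting dense visibility map supervises the NeRF's predicted visibility; this sketch only conveys the sweep-and-threshold principle.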
Pages: 11
Related Papers (50 items total)
  • [31] NeRF-QA: Neural Radiance Fields Quality Assessment Database. Martin, Pedro; Rodrigues, Antonio; Ascenso, Joao; Queluz, Maria Paula. 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), 2023: 107-110.
  • [32] Tetra-NeRF: Representing Neural Radiance Fields Using Tetrahedra. Kulhanek, Jonas; Sattler, Torsten. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 18412-18423.
  • [33] CaSE-NeRF: Camera Settings Editing of Neural Radiance Fields. Sun, Ciliang; Li, Yuqi; Li, Jiabao; Wang, Chong; Dai, Xinmiao. Advances in Computer Graphics (CGI 2023), Part II, 2024, 14496: 95-107.
  • [34] NeRF-DA: Neural Radiance Fields Deblurring With Active Learning. Hong, Sejun; Kim, Eunwoo. IEEE Signal Processing Letters, 2025, 32: 261-265.
  • [35] BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields. Wang, Peng; Zhao, Lingzhe; Ma, Ruijie; Liu, Peidong. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 4170-4179.
  • [36] FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality. Deng, Nianchen; He, Zhenyi; Ye, Jiannan; Duinkharjav, Budmonde; Chakravarthula, Praneeth; Yang, Xubo; Sun, Qi. IEEE Transactions on Visualization and Computer Graphics, 2022, 28(11): 3854-3864.
  • [37] Point-NeRF: Point-based Neural Radiance Fields. Xu, Qiangeng; Xu, Zexiang; Philip, Julien; Bi, Sai; Shu, Zhixin; Sunkavalli, Kalyan; Neumann, Ulrich. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 5428-5438.
  • [38] NeRF-Art: Text-Driven Neural Radiance Fields Stylization. Wang, Can; Jiang, Ruixiang; Chai, Menglei; He, Mingming; Chen, Dongdong; Liao, Jing. IEEE Transactions on Visualization and Computer Graphics, 2024, 30(8): 4983-4996.
  • [39] NeRF-SR: High Quality Neural Radiance Fields using Supersampling. Wang, Chen; Wu, Xian; Guo, Yuan-Chen; Zhang, Song-Hai; Tai, Yu-Wing; Hu, Shi-Min. Proceedings of the 30th ACM International Conference on Multimedia (MM 2022), 2022: 6445-6454.
  • [40] Ced-NeRF: A Compact and Efficient Method for Dynamic Neural Radiance Fields. Lin, Youtian. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 4, 2024: 3504-3512.