Multi-View Stereo and Depth Priors Guided NeRF for View Synthesis

Cited: 0
Authors
Deng, Wang [1 ]
Zhang, Xuetao [1 ]
Guo, Yu [2 ]
Lu, Zheng [3 ]
Institutions
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Software Engn, Xian, Peoples R China
[3] China Acad Space Technol, Inst Remote Sensing Satellite, Xian, Peoples R China
Keywords
View Synthesis; Neural Radiance Fields; Multi-View Stereo; Depth Priors
DOI
10.1109/ICPR56361.2022.9956249
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a new framework for novel view synthesis based on Neural Radiance Fields (NeRF). We aim to address two main limitations of NeRF. First, we propose to incorporate multi-view stereo into NeRF to help construct generalizable neural radiance fields across different scenes. Specifically, we build an MVS-Encoding Feature Volume with average group-wise correlation to aggregate multi-view appearance and geometry features for every source view. An MLP then encodes the neural radiance field from scene-dependent features interpolated from the MVS-Encoding Feature Volumes. As a result, our model can be applied to unseen scenes without any per-scene fine-tuning and can render realistic images from only a few input images. If more training images are provided, our method can be fine-tuned quickly to render even more realistic images. Second, for the fine-tuning phase, we propose a depth-priors-guided sampling method, which allows the model to represent more accurate geometry for the corresponding scene and thus render high-quality novel views. We evaluate our method on three common datasets. The experimental results show that our method outperforms other baselines, both with and without fine-tuning, and that the depth-priors-guided sampling method can easily be applied to other NeRF-based methods to further improve the quality of rendered images.
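Two components of the abstract lend themselves to a concrete illustration: average group-wise correlation for aggregating source-view features, and depth-priors-guided ray sampling. The following is a minimal, hypothetical Python sketch, not the paper's implementation; the function names, the choice of a clamped Gaussian for the prior-guided distribution, and all parameters are our assumptions:

```python
import random


def group_wise_correlation(ref_feat, src_feat, groups):
    """Group-wise correlation between a reference-view feature vector and
    one source-view feature vector: split the channels into `groups`
    equal groups and take the mean inner product within each group.
    (Sketch only; channel counts and grouping follow common MVS practice.)"""
    gsize = len(ref_feat) // groups
    return [
        sum(ref_feat[i] * src_feat[i]
            for i in range(g * gsize, (g + 1) * gsize)) / gsize
        for g in range(groups)
    ]


def aggregate_views(ref_feat, src_feats, groups):
    """Average the group-wise correlations over all source views, yielding
    one compact similarity descriptor (one value per group)."""
    corrs = [group_wise_correlation(ref_feat, s, groups) for s in src_feats]
    return [sum(c[g] for c in corrs) / len(corrs) for g in range(groups)]


def depth_guided_samples(depth_prior, sigma, n_samples, near, far, rng=None):
    """Depth-priors-guided sampling along a ray: instead of uniform
    stratified sampling over [near, far], concentrate samples around a
    per-ray depth prior. Here a Gaussian clamped to the ray interval is
    used; the paper's exact sampling distribution may differ."""
    rng = rng or random.Random()
    ts = [min(max(rng.gauss(depth_prior, sigma), near), far)
          for _ in range(n_samples)]
    return sorted(ts)
```

Under this sketch, features identical to the reference view yield high per-group correlations while unrelated views pull the average down, and rays with a reliable depth prior spend their sample budget near the surface rather than in empty space.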
Pages: 3922-3928
Page count: 7
Related Papers
50 records in total
  • [1] Multi-View Guided Multi-View Stereo
    Poggi, Matteo
    Conti, Andrea
    Mattoccia, Stefano
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 8391 - 8398
  • [2] MULTI-VIEW STEREO WITH SEMANTIC PRIORS
    Stathopoulou, E. -K.
    Remondino, F.
    [J]. 27TH CIPA INTERNATIONAL SYMPOSIUM: DOCUMENTING THE PAST FOR A BETTER FUTURE, 2019, 42-2 (W15): : 1135 - 1140
  • [3] Multi-view stereo-regulated NeRF for urban scene novel view synthesis
    Bian, Feihu
    Xiong, Suya
    Yi, Ran
    Ma, Lizhuang
    [J]. VISUAL COMPUTER, 2024,
  • [4] Uncertainty Guided Multi-View Stereo Network for Depth Estimation
    Su, Wanjuan
    Xu, Qingshan
    Tao, Wenbing
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (11) : 7796 - 7808
  • [5] Continuous Depth Estimation for Multi-view Stereo
    Liu, Yebin
    Cao, Xun
    Dai, Qionghai
    Xu, Wenli
    [J]. CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4, 2009, : 2121 - 2128
  • [6] MULTI-VIEW IMAGE FEATURE CORRELATION GUIDED COST AGGREGATION FOR MULTI-VIEW STEREO
    Lai, Yawen
    Qiu, Ke
    Wang, Ronggang
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2021,
  • [7] Context-Guided Multi-view Stereo with Depth Back-Projection
    Feng, Tianxing
    Zhang, Zhe
    Xiong, Kaiqiang
    Wang, Ronggang
    [J]. MULTIMEDIA MODELING, MMM 2023, PT II, 2023, 13834 : 91 - 102
  • [8] NeuralMVS: Bridging Multi-View Stereo and Novel View Synthesis
    Rosu, Radu Alexandru
    Behnke, Sven
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [9] Learning Depth for Multi-View Stereo with Adversarial Training
    Wang, Liang
    Fan, Deqiao
    Li, Jianshu
    [J]. PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 1674 - 1679
  • [10] Adaptive depth estimation for pyramid multi-view stereo
    Liao, Jie
    Fu, Yanping
    Yan, Qingan
    Luo, Fei
    Xiao, Chunxia
    [J]. COMPUTERS & GRAPHICS-UK, 2021, 97 : 268 - 278