DARF: Depth-Aware Generalizable Neural Radiance Field

Cited: 0
Authors
Shi, Yue [1 ]
Rong, Dingyi [1 ]
Chen, Chang [1 ]
Ma, Chaofan [1 ]
Ni, Bingbing [1 ]
Zhang, Wenjun [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
Keywords
Neural radiance field; Novel-view rendering; Generalizable NeRF; Depth-aware dynamic sampling; APPEARANCE;
DOI
10.1016/j.displa.2025.102996
Chinese Library Classification (CLC) Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Neural Radiance Field (NeRF) has revolutionized novel-view rendering and achieved impressive results. However, its inefficient sampling and per-scene optimization hinder wider application. Although several generalizable NeRFs have been proposed, their rendering quality remains unsatisfactory because they lack geometric cues and scene-specific characteristics. To address these issues, we propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy, which performs efficient novel-view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Unlike most existing generalizable NeRFs, our framework infers unseen scenes at both the pixel level and the geometry level from only a few input images. By introducing a pre-trained depth estimation module to derive a depth prior, narrowing the ray sampling interval to the space near the estimated surface, and sampling at the position of maximum expectation, we preserve scene-specific characteristics while learning attributes shared across scenes for novel-view synthesis. Moreover, we introduce a Multi-level Semantic Consistency (MSC) loss to support more informative representation learning. Extensive experiments on indoor and outdoor datasets show that, compared with state-of-the-art generalizable NeRF methods, DARF reduces the number of samples by 50% while improving both rendering quality and depth estimation. Our code is available at https://github.com/shiyue001/DARF.git.
Pages: 10
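The abstract describes Depth-Aware Dynamic Sampling as concentrating ray samples in a narrow interval around a surface depth predicted by a pre-trained estimator, instead of spanning the full near-far range. The PyTorch snippet below is a minimal sketch of that idea under simple assumptions; the function name `depth_aware_sample`, the fixed relative `margin`, and the stratified-plus-jitter scheme are illustrative choices, not the paper's actual implementation (which additionally samples at the expectation-maximum position).

```python
import torch

def depth_aware_sample(depth_prior, near, far, n_samples=32, margin=0.1):
    """Place ray samples in a narrow interval around a per-ray depth prior.

    depth_prior: (n_rays,) surface depth predicted by a pre-trained estimator
    near, far:   scalar scene depth bounds
    returns:     (n_rays, n_samples) sample depths along each ray
    """
    # Narrow the sampling interval to the proximity of the estimated surface,
    # clamped so it never leaves the scene's [near, far] range.
    t_near = torch.clamp(depth_prior * (1.0 - margin), min=near, max=far)
    t_far = torch.clamp(depth_prior * (1.0 + margin), min=near, max=far)

    # Stratified (evenly spaced) samples inside the narrowed interval.
    steps = torch.linspace(0.0, 1.0, n_samples, device=depth_prior.device)
    z_vals = t_near[:, None] + (t_far - t_near)[:, None] * steps[None, :]

    # Jitter each sample within its stratum so training sees varied depths.
    mids = 0.5 * (z_vals[:, 1:] + z_vals[:, :-1])
    upper = torch.cat([mids, z_vals[:, -1:]], dim=-1)
    lower = torch.cat([z_vals[:, :1], mids], dim=-1)
    return lower + (upper - lower) * torch.rand_like(z_vals)

# Example: 1024 rays whose estimated surface lies around 2.5 units away.
z = depth_aware_sample(torch.full((1024,), 2.5), near=0.5, far=6.0)
print(z.shape)  # torch.Size([1024, 32])
```

Restricting samples to the neighborhood of the depth prior is what allows the reported roughly 50% reduction in samples per ray relative to sampling the full near-far range.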