DARF: Depth-Aware Generalizable Neural Radiance Field

Times Cited: 0
Authors
Shi, Yue [1 ]
Rong, Dingyi [1 ]
Chen, Chang [1 ]
Ma, Chaofan [1 ]
Ni, Bingbing [1 ]
Zhang, Wenjun [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
Keywords
Neural radiance field; Novel-view rendering; Generalizable NeRF; Depth-aware dynamic sampling; APPEARANCE;
DOI
10.1016/j.displa.2025.102996
CLC Number
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
Neural Radiance Fields (NeRF) have revolutionized novel-view rendering and achieved impressive results. However, inefficient sampling and per-scene optimization hinder their wide application. Although several generalizable NeRFs have been proposed, their rendering quality is unsatisfactory because they lack geometric priors and scene-specific detail. To address these issues, we propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy, which performs efficient novel-view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Unlike most existing generalizable NeRFs, our framework infers unseen scenes at both the pixel level and the geometry level from only a few input images. By introducing a pre-trained depth estimation module to derive a depth prior, narrowing the ray sampling interval to the space near the estimated surface, and sampling at the position of maximum expectation, we preserve scene-specific characteristics while learning attributes common across scenes for novel-view synthesis. Moreover, we introduce a Multi-level Semantic Consistency (MSC) loss to encourage more informative representation learning. Extensive experiments on indoor and outdoor datasets show that, compared with state-of-the-art generalizable NeRF methods, DARF reduces the number of samples by 50% while improving both rendering quality and depth estimation. Our code is available at https://github.com/shiyue001/DARF.git.
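The core idea described in the abstract, restricting ray samples to a narrow band around a depth prior instead of stratifying over the full near-far interval, can be sketched as follows. This is a minimal illustration of depth-guided sampling in general, not the paper's actual DADS implementation; the function name, interval half-width `delta`, and stratified placement are all assumptions made for the example.

```python
import numpy as np

def depth_aware_samples(depth_prior, n_samples=32, delta=0.1, near=0.0, far=6.0):
    """Illustrative depth-guided ray sampling (a sketch, not DARF's code).

    Instead of placing n_samples stratified points over the full
    [near, far] interval, place them in a narrow band
    [depth - delta, depth + delta] around a per-ray depth prior.

    depth_prior: (R,) estimated surface depth for each ray.
    Returns: (R, n_samples) sorted sample depths per ray.
    """
    depth_prior = np.asarray(depth_prior, dtype=np.float64)
    # Narrow the sampling interval to the proximity of the estimated surface,
    # clipped so it stays inside the scene bounds.
    lo = np.clip(depth_prior - delta, near, far)
    hi = np.clip(depth_prior + delta, near, far)
    # Stratified sampling inside the narrowed per-ray interval:
    # one sample at the midpoint of each of n_samples equal sub-intervals.
    t = (np.arange(n_samples) + 0.5) / n_samples           # (S,)
    z = lo[:, None] + (hi - lo)[:, None] * t[None, :]      # (R, S)
    return z
```

With the same sample budget, all points land where the volume-rendering weights are expected to concentrate, which is why a depth prior lets the method halve the sample count without losing quality.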
Pages: 10
Related Papers
50 records
  • [21] Depth-aware image vectorization and editing
    Lu, Shufang
    Jiang, Wei
    Ding, Xuefeng
    Kaplan, Craig S.
    Jin, Xiaogang
    Gao, Fei
    Chen, Jiazhou
    VISUAL COMPUTER, 2019, 35 (6-8): : 1027 - 1039
  • [22] DEPTH-AWARE LAYERED EDGE FOR OBJECT PROPOSAL
    Liu, Jing
    Ren, Tongwei
    Bao, Bing-Kun
    Bei, Jia
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2016,
  • [23] Depth-Aware Display Using Synthetic Refocusing
    Suh, Sungjoo
    Yi, Kwonju
    Choi, Changkyu
    Park, Du Sik
    Kim, Chang Yeong
    IDW'11: PROCEEDINGS OF THE 18TH INTERNATIONAL DISPLAY WORKSHOPS, VOLS 1-3, 2011, : 429 - 432
  • [24] Adaptive depth-aware visual relationship detection
    Gan, Ming-Gang
    He, Yuxuan
    KNOWLEDGE-BASED SYSTEMS, 2022, 247
  • [25] ZoomShop: Depth-Aware Editing of Photographic Composition
    Liu, Sean J.
    Agrawala, Maneesh
    DiVerdi, Stephen
    Hertzmann, Aaron
    COMPUTER GRAPHICS FORUM, 2022, 41 (02) : 57 - 70
  • [26] DAnet: DEPTH-AWARE NETWORK FOR CROWD COUNTING
    Van-Su Huynh
    Hoang Tran
    Huang, Ching-Chun
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3001 - 3005
  • [27] Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields
    Liu, Tianqi
    Ye, Xinyi
    Shi, Min
    Huang, Zihao
    Pan, Zhiyu
    Peng, Zhan
    Cao, Zhiguo
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 7654 - 7663
  • [28] A Depth-Aware Network for Real-Time and High-Quality Neural Holography
    Zhang, Yunzhu
    Yu, Guangwei
    Chen, Chun
    He, Zhaoqin
    Wang, Jun
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 756 - 760
  • [29] Depth-Aware Object Tracking With a Conditional Variational Autoencoder
    Huang, Wenhui
    Gu, Jason
    Guo, Yinchen
    IEEE ACCESS, 2021, 9 : 94537 - 94547
  • [30] Learning depth-aware features for indoor scene understanding
    Chen, Suting
    Shao, Dongwei
    Zhang, Liangchen
    Zhang, Chuang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 42573 - 42590