RIP-NeRF: Learning Rotation-Invariant Point-based Neural Radiance Field for Fine-grained Editing and Compositing

Citations: 2
Authors
Wang, Yuze [1 ,2 ]
Wang, Junyi [1 ]
Qu, Yansong [1 ]
Qi, Yue [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[2] Beihang Univ, Qingdao Res Inst, Qingdao, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
scene editing; view synthesis; neural rendering; 3D deep learning; point-based representation;
DOI
10.1145/3591106.3592276
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural Radiance Fields (NeRF) show impressive results in synthesising novel views. However, existing controllable and editable NeRF methods are still incapable of both fine-grained editing and cross-scene compositing, greatly limiting their creative use and potential applications. When the radiance field is edited and composited at a fine granularity, a severe drawback is that varying the orientation of the corresponding explicit scaffold, such as points, meshes, or volumes, may degrade rendering quality. In this work, by combining the respective strengths of the implicit NeRF-based representation and the explicit point-based representation, we present a novel Rotation-Invariant Point-based NeRF (RIP-NeRF) for both fine-grained editing and cross-scene compositing of the radiance field. Specifically, we introduce a novel point-based radiance field representation to replace Cartesian coordinates as the network input. Rotation invariance is achieved by carefully designing a Neural Inverse Distance Weighting Interpolation (NIDWI) module to aggregate neural points, significantly improving rendering quality for fine-grained editing. To achieve cross-scene compositing, we disentangle the rendering module and the neural point-based representation in NeRF. After simply manipulating the corresponding neural points, a cross-scene neural rendering module is applied to achieve controllable cross-scene compositing without retraining. The advantages of RIP-NeRF in editing quality and capability are demonstrated by extensive editing and compositing experiments on room-scale real scenes and synthetic objects with complex geometry.
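The rotation invariance the abstract attributes to NIDWI can be illustrated with classical inverse distance weighting, which depends only on query-to-point distances. Below is a minimal sketch of that non-learned analogue (the paper's NIDWI module is a learned variant; the function name and signature here are hypothetical, not from the paper):

```python
import numpy as np

def idw_aggregate(query, points, features, power=2.0, eps=1e-8):
    """Aggregate per-point features at a query position via classical
    inverse distance weighting (a non-learned analogue of NIDWI).

    query:    (3,)   query position
    points:   (N, 3) neural point positions
    features: (N, F) per-point feature vectors
    """
    d = np.linalg.norm(points - query, axis=1)   # (N,) distances to query
    w = 1.0 / (d ** power + eps)                 # closer points dominate
    w = w / w.sum()                              # normalise weights
    return w @ features                          # (F,) interpolated feature

# Because only distances enter the weights, applying one rigid rotation to
# both the query and the point cloud leaves the interpolated feature
# unchanged -- the property that lets edited point sets be reoriented
# without degrading rendering quality.
```

This distance-only dependence is why rotating an edited object's neural points does not change the aggregated feature fed to the rendering network.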
Pages: 125-134
Page count: 10