LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces

Cited by: 1
Authors
Sarkar, Kripasindhu [1 ]
Buhler, Marcel C. [2 ]
Li, Gengyan [2 ]
Wang, Daoye [1 ]
Vicini, Delio [1 ]
Riviere, Jeremy [1 ]
Zhang, Yinda [3 ]
Orts-Escolano, Sergio [1 ]
Gotardo, Paulo [1 ]
Beeler, Thabo [1 ]
Meka, Abhimitra [4 ]
Affiliations
[1] Google Inc, Zurich, Switzerland
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Google Inc, Mountain View, CA USA
[4] Google Inc, San Francisco, CA USA
Keywords
Neural Rendering; Relighting; Relightable NeRF
DOI
10.1145/3610548.3618210
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
High-fidelity, photorealistic 3D capture of a human face is a longstanding problem in computer graphics: the complex material of skin, the intricate geometry of hair, and fine-scale textural details make it challenging. Traditional techniques rely on very large and expensive capture rigs to reconstruct explicit mesh geometry and appearance maps, and are limited by the accuracy of hand-crafted reflectance models. More recent volumetric methods (e.g., NeRFs) have enabled view synthesis and sometimes relighting by learning an implicit representation of the density and reflectance basis, but they suffer from artifacts and blurriness due to the inherent ambiguities in volumetric modeling. These problems are further exacerbated when capturing with few cameras and light sources. We present a novel technique for high-quality capture of a human face for 3D view synthesis and relighting using a sparse, compact capture rig of 15 cameras and 15 lights. Our method combines a neural volumetric representation with traditional mesh reconstruction from multiview stereo. The proxy geometry allows us to anchor the 3D density field to prevent artifacts and to guide the disentanglement of the intrinsic radiance components of face appearance, such as the diffuse and specular reflectance and incident radiance (shadowing) fields. Our hybrid representation significantly improves state-of-the-art quality for arbitrarily dense renders of a face from a desired camera viewpoint as well as under environmental, directional, and near-field lighting.
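The abstract's central idea, decomposing volumetric face appearance into diffuse, specular, and incident-radiance (shadowing) fields that are composited by standard volume rendering, can be illustrated with a short sketch. The sketch below is a minimal illustration only: the per-sample arrays, the function name, and the simple additive diffuse-plus-specular shading model are assumptions for exposition, not the paper's actual formulation or implementation.

import numpy as np

def composite_ray(sigma, delta, diffuse, specular, light_visibility):
    # Volume-render one ray from hypothetical per-sample intrinsic components.
    # sigma:            (S,)   density at each of S samples along the ray
    # delta:            (S,)   spacing between adjacent samples
    # diffuse:          (S, 3) diffuse radiance per sample
    # specular:         (S, 3) specular radiance per sample
    # light_visibility: (S,)   incident-radiance / shadowing term per sample
    alpha = 1.0 - np.exp(-sigma * delta)            # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans                         # standard NeRF render weights
    # Shade each sample: (diffuse + specular) modulated by the shadowing term.
    radiance = (diffuse + specular) * light_visibility[:, None]
    return (weights[:, None] * radiance).sum(axis=0)  # final RGB for the ray

# Toy usage with random inputs, purely to show the expected shapes:
S = 64
rng = np.random.default_rng(0)
rgb = composite_ray(
    sigma=rng.uniform(0.0, 5.0, S),
    delta=np.full(S, 0.01),
    diffuse=rng.uniform(0.0, 1.0, (S, 3)),
    specular=rng.uniform(0.0, 0.2, (S, 3)),
    light_visibility=rng.uniform(0.0, 1.0, S),
)

In a decomposition of this kind, lighting enters only through the shading of each sample, so relighting amounts to re-evaluating the shaded radiance under new illumination while the density, and hence the render weights, stay fixed.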
Pages: 11