3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue

Cited by: 0
Authors
Cai, Bowen [1 ]
Li, Yujie [1 ]
Liang, Yuqin [1 ]
Jia, Rongfei [1 ]
Zhao, Binqiang [1 ]
Gong, Mingming [2 ]
Fu, Huan [1 ]
Affiliations
[1] Alibaba Group, Tao Technology Department, Hangzhou 311121, People's Republic of China
[2] University of Melbourne, School of Mathematics and Statistics, Melbourne, VIC 3052, Australia
Keywords
3D scene creation; scene synthesis; lighting transfer; neural rendering; physically-based rendering
DOI
10.1109/TPAMI.2024.3381982
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Because of technical limitations, existing 3D reconstruction techniques can only produce rough 3D models (R3DMs) for most real objects. As a result, physically-based rendering (PBR) yields low-quality images or videos for scenes constructed from R3DMs. One promising solution is to represent real-world objects as Neural Fields such as NeRFs, which can generate photo-realistic renderings of an object from desired viewpoints. However, a drawback is that views synthesized through Neural Fields Rendering (NFR) cannot reflect the lighting details simulated on R3DMs in PBR pipelines, especially when object interactions during 3D scene creation cast local shadows. To solve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR so that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven-surface issue caused by R3DMs, and is empowered by several perceptually motivated constraints and a new Lab angle loss that enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior in synthesizing impressive lighting, and is promising for pushing NFR further into practical 3D modeling workflows.
Pages: 6292-6305
Page count: 14
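
The abstract above mentions a new "Lab angle loss" that enhances the contrast between lighting strength (the L channel of CIELAB) and colors (the a and b channels), but this record does not give its definition. Purely as a hedged illustration of what an angle-based loss in Lab space could look like, the sketch below treats each pixel's (L, a, b) triple as a vector and penalizes the angle between predicted and target vectors; the function name lab_angle_loss and the cosine-based formulation are assumptions for illustration, not the authors' method.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_angle_loss(pred_rgb, target_rgb, eps=1e-6):
    """Illustrative angle-based loss in CIELAB space.

    NOTE: an assumed formulation, not the Lab angle loss defined by
    Cai et al.; this record does not spell out their equation.
    Each pixel's (L, a, b) triple is treated as a vector, and the
    loss is the mean angle between predicted and target vectors, so
    a mismatch in lightness (L) is penalized jointly with a mismatch
    in chromaticity (a, b).
    """
    pred_lab = rgb2lab(pred_rgb)        # (H, W, 3); L in [0, 100]
    target_lab = rgb2lab(target_rgb)
    dot = np.sum(pred_lab * target_lab, axis=-1)
    norms = (np.linalg.norm(pred_lab, axis=-1)
             * np.linalg.norm(target_lab, axis=-1) + eps)
    cos_angle = np.clip(dot / norms, -1.0, 1.0)
    return float(np.mean(np.arccos(cos_angle)))

# Example: two random float RGB images with values in [0, 1].
pred = np.random.rand(64, 64, 3)
target = np.random.rand(64, 64, 3)
print(lab_angle_loss(pred, target))     # mean angle in radians
```

Because the angle couples all three Lab channels, a prediction that matches the target's colors but not its shading (or vice versa) still incurs a penalty, which is one plausible way such a loss could sharpen the contrast between lighting strength and color.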