ProLiF: Progressively-connected Light Field network for efficient view synthesis

Cited by: 1
|
Authors
Wang, Peng [1 ]
Liu, Yuan [1 ]
Lin, Guying [1 ]
Gu, Jiatao [2 ]
Liu, Lingjie [1 ,3 ]
Komura, Taku [1 ]
Wang, Wenping [4 ]
Affiliations
[1] Univ Hong Kong, Hong Kong, Peoples R China
[2] Apple, Cupertino, CA USA
[3] Univ Penn, Philadelphia, PA USA
[4] Texas A&M Univ, PETR 416,400 Bizzell St, College Stn, TX 77843 USA
Source
COMPUTERS & GRAPHICS-UK | 2024年 / 120卷
Keywords
Neural rendering; View synthesis; Light field;
DOI
10.1016/j.cag.2024.103913
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Code
081202 ; 0835 ;
Abstract
This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This cost prevents the differentiable rendering of a large patch of pixels in the training stage for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our proposed simple architecture, ProLiF, encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To preserve the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF combines well with the LPIPS loss to achieve robustness to varying lighting conditions, and with the NNFM and CLIP losses to edit the rendering style of the scene.
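The two-plane light-field encoding named in the abstract maps each camera ray to a 4D coordinate by intersecting it with two fixed parallel planes; a network can then predict a color directly from that coordinate, so a single evaluation per ray suffices (versus hundreds of point samples along each ray for NeRF). Below is a minimal sketch of the parameterization only, assuming planes at z = 0 and z = 1; the paper's actual plane placement, normalization, and network design are not specified here.

```python
import numpy as np

def two_plane_coords(origins, dirs, z_near=0.0, z_far=1.0):
    """Map rays to 4D two-plane light-field coordinates (u, v, s, t).

    origins: (N, 3) ray origins; dirs: (N, 3) ray directions with
    nonzero z-components (forward-facing assumption).
    Each ray is intersected with the planes z = z_near and z = z_far;
    the (x, y) hit points on the two planes form the 4D coordinate.
    A light-field network would then regress RGB from this coordinate.
    """
    # Ray parameter at which each plane is reached: o_z + t * d_z = z_plane
    t_near = (z_near - origins[:, 2]) / dirs[:, 2]
    t_far = (z_far - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_near[:, None] * dirs[:, :2]  # hits on near plane
    st = origins[:, :2] + t_far[:, None] * dirs[:, :2]   # hits on far plane
    return np.concatenate([uv, st], axis=1)              # shape (N, 4)

# Example: two axis-aligned rays and one slanted ray.
origins = np.array([[0.0, 0.0, -1.0], [0.5, 0.2, -1.0], [0.0, 0.0, 0.0]])
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
coords = two_plane_coords(origins, dirs)
```

Because the whole pixel color comes from one forward pass on a 4D input, a large batch of rays (a full image patch) fits in memory at once, which is what enables the patch-level LPIPS/NNFM/CLIP losses discussed in the abstract.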
Pages: 11
Related papers
50 records in total
  • [21] LONG SHORT TERM MEMORY NETWORKS FOR LIGHT FIELD VIEW SYNTHESIS
    Hog, Matthieu
    Sabater, Neus
    Guillemot, Christine
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 724 - 728
  • [22] Linear View Synthesis Using a Dimensionality Gap Light Field Prior
    Levin, Anat
    Durand, Fredo
    2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2010, : 1831 - 1838
  • [23] Attention Mechanism-Based Light-Field View Synthesis
    Gul, M. Shahzeb Khan
    Mukati, M. Umair
    Batz, Michel
    Forchhammer, Soren
    Keinert, Joachim
    IEEE ACCESS, 2022, 10 : 7895 - 7913
  • [24] Real-time virtual view synthesis using light field
    Yao, Li
    Liu, Yunjian
    Xu, Weixin
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2016, : 1 - 10
  • [25] LIGHT FIELD COMPRESSION USING DEPTH IMAGE BASED VIEW SYNTHESIS
    Jiang, Xiaoran
    Le Pendu, Mikael
    Guillemot, Christine
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2017,
  • [27] Geometry-aware view reconstruction network for light field image compression
    Zhang, Youzhi
    Wan, Lifei
    Mao, Yifan
    Huang, Xinpeng
    Liu, Deyang
    SCIENTIFIC REPORTS, 2022, 12 (01):
  • [29] SynthNet: A skip connected depthwise separable neural network for Novel View Synthesis of solid objects
    Anupama, V
    Kiran, A. Geetha
    RESULTS IN ENGINEERING, 2022, 13
  • [30] Learning Spherical Radiance Field for Efficient 360° Unbounded Novel View Synthesis
    Chen, Minglin
    Wang, Longguang
    Lei, Yinjie
    Dong, Zilong
    Guo, Yulan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 3722 - 3734