Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

Cited by: 589
Authors
Mildenhall, Ben [1 ]
Srinivasan, Pratul P. [1 ]
Ortiz-Cayon, Rodrigo [2 ]
Kalantari, Nima Khademi [3 ]
Ramamoorthi, Ravi [4 ]
Ng, Ren [1 ]
Kar, Abhishek [2 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Fyusion Inc, San Francisco, CA USA
[3] Texas A&M Univ, College Stn, TX 77843 USA
[4] Univ Calif San Diego, La Jolla, CA 92093 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2019, Vol. 38, No. 4
Funding
National Science Foundation (USA)
Keywords
view synthesis; plenoptic sampling; light fields; image-based rendering; deep learning;
DOI
10.1145/3306346.3322980
CLC Number
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views. We demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene and viewers that enable real-time virtual exploration on desktop and mobile platforms.
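The rendering primitive the abstract describes, turning a multiplane image (MPI) into a view, reduces to back-to-front alpha ("over") compositing of the MPI's RGBA planes. The sketch below is a minimal, illustrative NumPy version of that compositing step only; the function name is our own, and the paper's homography warping to novel viewpoints and blending across adjacent local light fields are omitted.

```python
import numpy as np

def composite_mpi(rgba_planes):
    """Render an MPI by 'over'-compositing its RGBA planes back to front.

    rgba_planes: iterable of (H, W, 4) float arrays ordered from the
    farthest plane to the nearest; returns an (H, W, 3) image.
    """
    planes = list(rgba_planes)
    out = np.zeros(planes[0].shape[:2] + (3,))
    for rgba in planes:  # nearer planes progressively occlude farther ones
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

For example, an opaque far plane behind a half-transparent near plane yields an even mix of the two colors; the paper's blending of multiple local light fields sits on top of this primitive.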
Pages: 14
Related Papers (50 total)
  • [1] Robust Local Light Field Synthesis via Occlusion-aware Sampling and Deep Visual Feature Fusion
    Xing, Wenpeng
    Chen, Jie
    Guo, Yike
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (03) : 408 - 420
  • [2] Optimized sampling for view interpolation in light fields using local dictionaries
    Schedl, David C.
    Birklbauer, Clemens
    Bimber, Oliver
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2018, 168 : 93 - 103
  • [3] Virtual view synthesis for 3D light-field display based on feature reprojection and fusion
    Qi, Shuai
    Sang, Xinzhu
    Yan, Binbin
    Chen, Duo
    Wang, Peng
    Wang, Huachun
    Ye, Xiaoqian
    OPTICS COMMUNICATIONS, 2022, 519
  • [4] Fast and Accurate Light Field View Synthesis by Optimizing Input View Selection
    Wang, Xingzheng
    Zan, Yongqiang
    You, Senlin
    Deng, Yuanlong
    Li, Lihua
    MICROMACHINES, 2021, 12 (05)
  • [5] A Wide Field-of-View Light-Field Camera with Adjustable Multiplicity for Practical Applications
    Kim, Hyun Myung
    Yoo, Young Jin
    Lee, Jeong Min
    Song, Young Min
    SENSORS, 2022, 22 (09)
  • [6] Feature Field Fusion for few-shot novel view synthesis
    Li, Junting
    Zhou, Yanghong
    Fan, Jintu
    Shou, Dahua
    Xu, Sa
    Mok, P. Y.
    IMAGE AND VISION COMPUTING, 2025, 156
  • [7] A Fast View Synthesis Implementation Method for Light Field Applications
    Gao, Wei
    Zhou, Linjie
    Tao, Lvfang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (04)
  • [8] VIEW SYNTHESIS FOR LIGHT FIELD CODING USING DEPTH ESTIMATION
    Huang, Xinpeng
    An, Ping
    Shan, Liang
    Ma, Ran
    Shen, Liquan
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018
  • [9] Disparity Guided Texture Inpainting for Light Field View Synthesis
    Li, Yue
    Mathew, Reji
    Taubman, David
    2018 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2018 : 433 - 439