Real-time multi-view background matting for 3D light field video

Times Cited: 0
Authors
Chen, Junfeng [1 ]
Sang, Xinzhu [1 ]
Yuan, Jinhui [1 ]
Yan, Binbin [1 ]
Chen, Duo [1 ]
Wang, Peng [1 ]
Yang, Zeyuan [1 ]
Affiliations
[1] Beijing University of Posts and Telecommunications, State Key Laboratory of Information Photonics and Optical Communications, Beijing, People's Republic of China
Source
Keywords
Background matting; 3D light field; Deep learning; Multi-view images
DOI
10.1117/12.2606235
CLC Number (Chinese Library Classification)
O43 [Optics]
Subject Classification Code
070207; 0803
Abstract
Matting is a method for extracting foreground objects of arbitrary shape from an image. In the field of 3D display, matting is of great significance: extracting a high-quality target foreground reduces unnecessary stereo-matching computation and improves the quality of the 3D display. This paper focuses on human subjects in the 3D light field and proposes a real-time multi-view background matting algorithm based on deep learning. Live 3D video broadcast places high demands on the real-time performance of the matting algorithm. We pre-compose a group of multi-view images captured at the same moment into a single multi-view combined image. The network performs background matting directly on this combined image and outputs the whole group of foreground images at once. Because the background of the multi-view combined image is not continuous across views, a pre-photographed background picture without the human subject is added to the input to assist the network in learning. In addition, we add a channel subtraction module to help the network better understand the respective roles of the original image and the background image in the matting task. The method is evaluated on our multi-view dataset. For images with different background complexity, it runs at about 65 frames per second while maintaining relatively stable accuracy. The method can efficiently generate multi-view matting results and meets the requirements of live 3D video broadcast.
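The abstract describes a data flow: tile same-timestamp views into one multi-view combined image, feed it together with a pre-captured clean background plate, and use a channel subtraction module to expose the image-minus-background cue. The following is a minimal PyTorch sketch of that flow, not the authors' implementation; the horizontal tiling layout, the nine-channel input, the placeholder backbone, and all names (compose_multiview, ChannelSubtraction, MultiViewBackgroundMatting) are illustrative assumptions.

```python
# Minimal sketch of the input assembly and channel-subtraction idea described
# in the abstract. The network below is a toy placeholder, not the paper's model.
import torch
import torch.nn as nn


def compose_multiview(views):
    """Tile N same-timestamp views into one combined image.

    views: tensor of shape (N, 3, H, W)
    returns: tensor of shape (3, H, N*W)  (horizontal tiling is assumed)
    """
    return torch.cat(list(views), dim=2)


class ChannelSubtraction(nn.Module):
    """Assumed form of the 'channel subtraction module': provide the network
    with an explicit image-minus-background cue alongside the raw inputs."""

    def forward(self, image, background):
        return image - background


class MultiViewBackgroundMatting(nn.Module):
    """Toy stand-in for the matting network: takes the combined image, the
    clean background plate, and their difference, and predicts one alpha
    matte covering all views at once."""

    def __init__(self):
        super().__init__()
        self.subtract = ChannelSubtraction()
        self.backbone = nn.Sequential(          # placeholder backbone
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, combined, background):
        diff = self.subtract(combined, background)
        x = torch.cat([combined, background, diff], dim=1)  # 3+3+3 channels
        return self.backbone(x)                             # alpha in [0, 1]


if __name__ == "__main__":
    num_views, H, W = 4, 256, 256
    views = torch.rand(num_views, 3, H, W)             # frames from N cameras
    bg_views = torch.rand(num_views, 3, H, W)           # pre-shot clean plates

    combined = compose_multiview(views).unsqueeze(0)    # (1, 3, H, N*W)
    background = compose_multiview(bg_views).unsqueeze(0)

    model = MultiViewBackgroundMatting()
    alpha = model(combined, background)                  # (1, 1, H, N*W)

    # Split the single network output back into per-view mattes.
    per_view_alpha = alpha.chunk(num_views, dim=3)
    print(len(per_view_alpha), per_view_alpha[0].shape)
```

Concatenating the raw image, the background plate, and their difference is one plausible reading of the channel subtraction module; the actual network would replace the placeholder backbone with a proper matting architecture and also predict per-view foreground colors.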
Pages: 10