Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion

Cited by: 168
Authors:
Keller, Maik [1 ]
Lefloch, Damien [2 ]
Lambers, Martin [2 ]
Izadi, Shahram [3 ]
Weyrich, Tim [4 ]
Kolb, Andreas [2 ]
Affiliations:
[1] Pmdtechnologies, Siegen, Germany
[2] Univ Siegen, D-57068 Siegen, Germany
[3] Microsoft Res, Mountain View, CA USA
[4] UCL, London WC1E 6BT, England
Keywords:
CAMERAS;
DOI:
10.1109/3DV.2013.9
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Real-time or online 3D reconstruction has wide applicability and receives further interest due to the availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model that is continuously refined. Designing such systems requires an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale for higher-quality reconstructions of small objects/scenes, or handle larger scenes by trading real-time performance and/or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction that matches the quality of existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-based representation, which works directly with the input acquired from range/depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations: camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high-quality reconstructions of a diverse set of scenes at varying scales.
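The "fusion of depth maps into a single denoised model" mentioned in the abstract is, in point-based systems of this kind, commonly realized as a confidence-weighted running average maintained per point (surfel). The sketch below illustrates that general scheme under stated assumptions; it is not the paper's exact formulation, and the function name, the per-measurement weight, and the weight cap are illustrative choices.

```python
import numpy as np

def fuse_measurement(surfel_pos, surfel_weight, meas_pos,
                     meas_weight=1.0, max_weight=10.0):
    """Merge one new depth measurement into an existing surfel.

    The surfel position is updated by confidence-weighted averaging,
    and its accumulated weight grows with each observation. Clamping
    the weight keeps the model responsive to scene changes, since a
    fixed number of new samples can still shift a long-observed point.
    (Illustrative sketch of point-based fusion, not the paper's method.)
    """
    new_weight = surfel_weight + meas_weight
    new_pos = (surfel_pos * surfel_weight + meas_pos * meas_weight) / new_weight
    return new_pos, min(new_weight, max_weight)

# Example: a surfel at z = 1.00 m with accumulated weight 3 receives
# a new measurement at z = 1.04 m with unit weight.
pos, w = fuse_measurement(np.array([0.0, 0.0, 1.00]), 3.0,
                          np.array([0.0, 0.0, 1.04]))
# pos[2] == (1.00 * 3 + 1.04 * 1) / 4 == 1.01, w == 4.0
```

Unreliable points can be handled in the same framework by treating low-weight surfels as unstable until they accumulate enough confirming observations, which is one common way such systems perform outlier removal.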
Pages: 1 - 8 (8 pages)
Related Papers (50 total):
  • [1] Real-time 3D Reconstruction Using a Combination of Point-based and Volumetric Fusion
    Xia, Zhengyu
    Kim, Joohee
    Park, Young Soo
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 8449 - 8455
  • [2] Real-time streaming of point-based 3D video
    Lamboray, E
    Würmlin, S
    Gross, M
    [J]. IEEE VIRTUAL REALITY 2004, PROCEEDINGS, 2004, : 91 - +
  • [3] Real-time 3D reconstruction techniques applied in dynamic scenes: A systematic literature review
    Ingale, Anupama K.
    Udayan, Divya J.
    [J]. COMPUTER SCIENCE REVIEW, 2021, 39
  • [4] Real-time fusion of multiple videos and 3D real scenes based on optimal viewpoint selection
    Wang, Shunli
    Hu, Qingwu
    Zhao, Pengcheng
    Yang, Honggang
    Wu, Xuan
    Ai, Mingyao
    Zhang, Xujie
    [J]. TRANSACTIONS IN GIS, 2023, 27 (01) : 198 - 223
  • [5] Real-time dynamic reflections for realistic rendering of 3D scenes
    de Macedo, Daniel Valente
    Formico Rodrigues, Maria Andréia
    [J]. VISUAL COMPUTER, 2018, 34 (03) : 337 - 346
  • [6] Point-Based 3D Reconstruction of Thin Objects
    Ummenhofer, Benjamin
    Brox, Thomas
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 969 - 976
  • [7] Real-time terrain reconstruction using 3D flag map for point clouds
    Song, Wei
    Cho, Kyungeun
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (10) : 3459 - 3475