Fast Generation of Spatiotemporal-Consistent Dynamic Atlas

Cited by: 0
|
Authors
Hong Y. [1 ]
Wu K. [1 ]
Li W. [1 ]
Hu W. [2 ]
Liu L. [1 ]
Affiliations
[1] School of Mathematical Sciences, University of Science and Technology of China, Hefei
[2] Huawei Technologies Co., Ltd., Beijing
Source
Liu, Ligang (lgliu@ustc.edu.cn) / Institute of Computing Technology / Vol. 32
Keywords
Dynamic 3D reconstruction; Spatiotemporal-consistent atlas; Transmission of sequence of textured meshes;
DOI
10.3724/SP.J.1089.2020.17959
Abstract
To transmit dynamic 3D reconstruction results, 3D models must be encoded efficiently with realistic and compact texture information. This paper proposes a method for quickly generating a spatiotemporal-consistent texture atlas for dynamic 3D reconstruction. First, the triangle-wise correspondence between adjacent meshes is computed quickly, since adjacent canonical meshes from dynamic reconstruction have similar geometric shapes. Second, a consistent segmentation of the mesh sequence is obtained by solving a mesh-patch-based optimization. Third, the correspondence and segmentation of the mesh sequence are used to quickly generate spatiotemporal-consistent atlases for the meshes. Experimental results show that the method enables efficient dynamic 3D reconstruction with a compact texture atlas video: the bitrate of the atlas video is reduced by about 50% under lossless H.264 encoding and by 70%~80% under lossy H.264 encoding. © 2020, Beijing China Science Journal Publishing Co. Ltd. All rights reserved.
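The first step of the abstract — triangle-wise correspondence between adjacent, geometrically similar meshes — can be sketched as below. This is a minimal illustration, assuming that nearest triangle centroids approximate the mapping between two similar canonical meshes; the paper's actual matching procedure is not specified in this record, and the mesh data here is a toy example.

```python
import numpy as np

def triangle_centroids(vertices, faces):
    # Centroid of each triangle: mean of its three corner vertices.
    return vertices[faces].mean(axis=1)

def match_triangles(verts_a, faces_a, verts_b, faces_b):
    # For each triangle of mesh A, index of the nearest triangle of mesh B,
    # measured by centroid distance (brute force; a k-d tree scales better).
    ca = triangle_centroids(verts_a, faces_a)
    cb = triangle_centroids(verts_b, faces_b)
    dists = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Two "adjacent frames" of a toy sequence: a tetrahedron and a slightly
# translated copy, standing in for similar canonical meshes.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
moved = verts + 0.01  # small deformation between adjacent frames
corr = match_triangles(verts, faces, moved, faces)
print(corr)  # each triangle maps to its own counterpart: [0 1 2 3]
```

Such a per-triangle map is what lets a segmentation (and hence atlas charts) be propagated consistently from one frame to the next.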
Pages: 709-720 (11 pages)