Neural Monocular 3D Human Motion Capture with Physical Awareness

Cited by: 50
Authors
Shimada, Soshi [1 ]
Golyanik, Vladislav [1 ]
Xu, Weipeng [2 ]
Perez, Patrick [3 ]
Theobalt, Christian [1 ]
Affiliations
[1] Max Planck Inst Informat, Saarland Informat Campus, Saarbrücken, Germany
[2] Facebook Reality Labs, Pittsburgh, PA, USA
[3] Valeo.ai, Paris, France
Source
ACM TRANSACTIONS ON GRAPHICS | 2021, Vol. 40, No. 4
Keywords
Monocular 3D Human Motion Capture; Physical Awareness; Global 3D; Physionical Approach; POSE
DOI
10.1145/3450626.3459825
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Code
081202; 0835
Abstract
We present a new trainable system for physically plausible markerless 3D human motion capture, which achieves state-of-the-art results in a broad range of challenging scenarios. Unlike most neural methods for human motion capture, our approach, which we dub "physionical", is aware of physical and environmental constraints. It combines in a fully-differentiable way several key innovations, i.e., 1) a proportional-derivative controller, with gains predicted by a neural network, that reduces delays even in the presence of fast motions, 2) an explicit rigid body dynamics model, and 3) a novel optimisation layer that prevents physically implausible foot-floor penetration as a hard constraint. The inputs to our system are 2D joint keypoints, which are canonicalised in a novel way so as to reduce the dependency on intrinsic camera parameters, both at train and test time. This enables more accurate global translation estimation without loss of generalisability. Our model can be fine-tuned with only 2D annotations when 3D annotations are not available. It produces smooth and physically principled 3D motions at interactive frame rates in a wide variety of challenging scenes, including newly recorded ones. Its advantages are especially noticeable on in-the-wild sequences that significantly differ from common 3D pose estimation benchmarks such as Human3.6M and MPI-INF-3DHP. Qualitative results are provided in the supplementary video.
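The abstract names a proportional-derivative (PD) controller whose per-frame gains are predicted by a neural network and whose torques drive an explicit rigid-body dynamics model. The sketch below illustrates only the PD torque step in Python/NumPy; the function names, the toy gain predictor, and the joint count are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Minimal illustrative sketch (not the authors' code): per-joint PD control
# tau = kp * (q_target - q) - kd * qdot, with gains kp, kd supplied per frame.
import numpy as np

def pd_torques(q, qdot, q_target, kp, kd):
    """Proportional-derivative torques for one frame.

    q, qdot, q_target : (J,) current joint angles, joint velocities, and
                        kinematically estimated target angles.
    kp, kd            : (J,) per-joint gains (in the paper these are
                        predicted by a neural network; here they are toy values).
    """
    return kp * (q_target - q) - kd * qdot

def toy_gain_predictor(q, qdot):
    """Stand-in for the learned gain network: returns positive gains."""
    num_joints = q.shape[0]
    kp = np.full(num_joints, 300.0)  # hypothetical stiffness value
    kd = 2.0 * np.sqrt(kp)           # critically damped heuristic
    return kp, kd

if __name__ == "__main__":
    J = 23                                   # assumed SMPL-like joint count
    rng = np.random.default_rng(0)
    q = rng.normal(size=J) * 0.1             # current pose (radians)
    qdot = np.zeros(J)                       # current joint velocities
    q_target = rng.normal(size=J) * 0.1      # target pose from the kinematics stage
    kp, kd = toy_gain_predictor(q, qdot)
    tau = pd_torques(q, qdot, q_target, kp, kd)
    print(tau.shape)                         # (23,) torques for the dynamics model
```

In the described pipeline, such torques feed the rigid-body dynamics model, and a subsequent optimisation layer enforces foot-floor non-penetration as a hard constraint.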
Pages: 15