VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild

Cited by: 33
Authors
Zhang, Yifu [1 ]
Wang, Chunyu [2 ]
Wang, Xinggang [1 ]
Liu, Wenyu [1 ]
Zeng, Wenjun [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan 430074, Peoples R China
[2] Microsoft Res Asia, Beijing 100080, Peoples R China
Keywords
3D human pose tracking; volumetric; multiple camera views; network
DOI
10.1109/TPAMI.2022.3163709
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We present VoxelTrack for multi-person 3D pose estimation and tracking from a few cameras separated by wide baselines. It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment. In contrast to previous efforts, which require establishing cross-view correspondences from noisy 2D pose estimates, it estimates and tracks 3D poses directly from a voxel-based representation constructed from the multi-view images. We first discretize the 3D space into regular voxels and compute a feature vector for each voxel by averaging the body joint heatmaps inversely projected from all views. We then estimate 3D poses from this voxel representation by predicting whether each voxel contains a particular body joint. Similarly, a Re-ID feature is computed for each voxel and used to track the estimated 3D poses over time. The main advantage of the approach is that it avoids making hard decisions based on individual images, so it can robustly estimate and track 3D poses even when people are severely occluded in some views. It outperforms state-of-the-art methods by a large margin on four public datasets: Shelf, Campus, Human3.6M, and CMU Panoptic.
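To make the abstract's projection-and-averaging step concrete, the sketch below shows one plausible way to build the voxel feature volume. It is a minimal illustration under assumptions, not the authors' implementation: the function build_voxel_features, its argument layout, and the nearest-neighbour heatmap sampling are all hypothetical (the record here does not specify the paper's sampling scheme or network details).

    import numpy as np

    def build_voxel_features(heatmaps, projections, voxel_centers, img_size):
        # Hypothetical sketch: project every voxel center into each camera,
        # sample the per-joint 2D heatmaps at the projected pixel, and
        # average the samples over all views ("inverse projection").
        #   heatmaps:      list of V arrays, each (J, H, W)
        #   projections:   list of V (3, 4) camera projection matrices
        #   voxel_centers: (N, 3) world coordinates of the voxel grid
        #   img_size:      (img_w, img_h) of the original images
        J, H, W = heatmaps[0].shape
        feats = np.zeros((voxel_centers.shape[0], J), dtype=np.float32)
        homog = np.hstack([voxel_centers, np.ones((voxel_centers.shape[0], 1))])
        for hm, P in zip(heatmaps, projections):
            uvw = homog @ P.T                                   # (N, 3) homogeneous pixels
            uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)  # perspective divide
            # rescale image pixels to heatmap resolution, clamp to valid range
            u = np.clip(uv[:, 0] * W / img_size[0], 0, W - 1).astype(int)
            v = np.clip(uv[:, 1] * H / img_size[1], 0, H - 1).astype(int)
            feats += hm[:, v, u].T                              # nearest-neighbour lookup, (N, J)
        return feats / len(heatmaps)                            # average across views

Given such a volume, per-voxel 3D pose estimation reduces to predicting, for every voxel, whether it contains each body joint; a parallel branch could aggregate Re-ID features over the same grid to associate the estimated poses across frames.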
Pages: 2613-2626
Page count: 14
Related papers
50 items in total
  • [31] Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views
    Dong, Junting; Jiang, Wen; Huang, Qixing; Bao, Hujun; Zhou, Xiaowei
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 7784-7793
  • [32] Single-shot 3D multi-person pose estimation in complex images
    Benzine, Abdallah; Luvison, Bertrand; Pham, Quoc Cuong; Achard, Catherine
    Pattern Recognition, 2021, 112
  • [33] CRENet: Crowd region enhancement network for multi-person 3D pose estimation
    Li, Zhaokun; Liu, Qiong
    Image and Vision Computing, 2024, 151
  • [34] PI-Net: Pose Interacting Network for Multi-Person Monocular 3D Pose Estimation
    Guo, Wen; Corona, Enric; Moreno-Noguer, Francesc; Alameda-Pineda, Xavier
    2021 IEEE Winter Conference on Applications of Computer Vision (WACV 2021), 2021: 2795-2805
  • [35] Exploring Severe Occlusion: Multi-Person 3D Pose Estimation with Gated Convolution
    Gu, Renshu; Wang, Gaoang; Hwang, Jenq-Neng
    2020 25th International Conference on Pattern Recognition (ICPR), 2021: 8243-8250
  • [36] MMDA: Multi-person marginal distribution awareness for monocular 3D pose estimation
    Liu, Sheng; Shuai, Jianghai; Li, Yang; Du, Sidan
    IET Image Processing, 2023, 17 (07): 2182-2191
  • [37] Single-Stage is Enough: Multi-Person Absolute 3D Pose Estimation
    Jin, Lei; Xu, Chenyang; Wang, Xiaojuan; Xiao, Yabo; Guo, Yandong; Nie, Xuecheng; Zhao, Jian
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 13076-13085
  • [38] Unsupervised universal hierarchical multi-person 3D pose estimation for natural scenes
    Gu, Renshu; Jiang, Zhongyu; Wang, Gaoang; McQuade, Kevin; Hwang, Jenq-Neng
    Multimedia Tools and Applications, 2022, 81 (23): 32883-32906
  • [39] Multi-Person 3D Pose and Shape Estimation via Inverse Kinematics and Refinement
    Cha, Junuk; Saqlain, Muhammad; Kim, GeonU; Shin, Mingyu; Baek, Seungryul
    Computer Vision - ECCV 2022, Pt V, 2022, 13665: 660-677