JSL3D: Joint subspace learning with implicit structure supervision for 3D pose estimation

Cited by: 4
Authors
Jiang, Mengxi [1 ,2 ]
Zhou, Shihao [2 ]
Li, Cuihua [2 ]
Lei, Yunqi [2 ]
Affiliations
[1] Fuzhou Univ, Sch Adv Mfg, Jinjiang 362251, Peoples R China
[2] Xiamen Univ, Dept Comp Sci, Xiamen 361005, Peoples R China
Keywords
3D pose estimation; Sparse representation model; Implicit structure supervision; Joint subspace learning; SPARSE REPRESENTATION;
DOI
10.1016/j.patcog.2022.108965
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Estimating 3D human poses from a single image is an important task in computer graphics. Most model-based estimation methods represent the labeled/detected 2D poses and the projection of approximated 3D poses using vector representations of body joints. However, such lower-dimensional vector representations fail to maintain the spatial relations of the original body joints, because the representations do not consider the inherent structure of body joints. In this paper, we propose JSL3D, a novel joint subspace learning approach with implicit structure supervision based on the Sparse Representation (SR) model, which captures the latent spatial relations of 2D body joints with an end-to-end autoencoder network. JSL3D jointly combines the learned latent spatial relations and the 2D joints as inputs for the standard SR inference frame. The optimization is processed simultaneously via geometric priors in both the latent and original feature spaces. We have evaluated JSL3D on four large-scale and well-recognized benchmarks: Human3.6M, HumanEva-I, CMU MoCap and MPII. The experiment results demonstrate the effectiveness of JSL3D. © 2022 Elsevier Ltd. All rights reserved.
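The Sparse Representation framing in the abstract can be illustrated with a generic sparse-coding sketch for 2D-to-3D pose lifting: a 3D pose is modeled as a sparse combination of basis poses, and the sparse code is recovered from observed 2D joints under a known projection. This is a minimal, self-contained sketch of the general SR pose model, not the paper's actual JSL3D pipeline (it omits the autoencoder and the joint latent-space optimization); the names `ista_sparse_code`, `B3d`, and the toy data are illustrative assumptions.

```python
import numpy as np

def ista_sparse_code(A, y, lam=1e-3, n_iter=500):
    """Solve min_c 0.5*||A c - y||^2 + lam*||c||_1 via ISTA (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ c - y)                # gradient of the quadratic data term
        z = c - g / L                        # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

# Toy setup: a dictionary of K basis 3D poses with J joints each,
# flattened joint-wise as [x1, y1, z1, x2, y2, z2, ...].
rng = np.random.default_rng(0)
J, K = 15, 8
B3d = rng.standard_normal((3 * J, K))        # hypothetical 3D pose dictionary
P = np.kron(np.eye(J), np.array([[1.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0]]))  # orthographic projection
c_true = np.zeros(K)
c_true[[1, 4]] = [0.7, -0.3]                 # sparse ground-truth coefficients
x2d = P @ (B3d @ c_true)                     # observed 2D joints (noiseless)

c_hat = ista_sparse_code(P @ B3d, x2d)       # sparse code from 2D evidence
pose3d = (B3d @ c_hat).reshape(J, 3)         # recovered 3D pose, one row per joint
```

In this noiseless toy problem the recovered coefficients closely match `c_true`; real pipelines add noise terms and structural priors on top of this basic model.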
Pages: 12
Related Papers
50 records in total
  • [1] Ordinal Depth Supervision for 3D Human Pose Estimation
    Pavlakos, Georgios
    Zhou, Xiaowei
    Daniilidis, Kostas
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7307 - 7316
  • [2] Constructing Implicit 3D Shape Models for Pose Estimation
    Arie-Nachimson, Mica
    Basri, Ronen
    [J]. 2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2009, : 1341 - 1348
  • [3] MONOCULAR 3D HUMAN POSE ESTIMATION BY MULTIPLE HYPOTHESIS PREDICTION AND JOINT ANGLE SUPERVISION
    Panda, Aditya
    Mukherjee, Dipti Prasad
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3243 - 3247
  • [4] Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision
    Niemeyer, Michael
    Mescheder, Lars
    Oechsle, Michael
    Geiger, Andreas
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3501 - 3512
  • [5] Learning to Infer Implicit Surfaces without 3D Supervision
    Liu, Shichen
    Saito, Shunsuke
    Chen, Weikai
    Li, Hao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [6] Unsupervised 3D Pose Estimation with Geometric Self-Supervision
    Chen, Ching-Hang
    Tyagi, Ambrish
    Agrawal, Amit
    Drover, Dylan
    Rohith, M. V.
    Stojanov, Stefan
    Rehg, James M.
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 5707 - 5717
  • [7] Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision
    Bao, Wenxia
    Ma, Zhongyu
    Liang, Dong
    Yang, Xianjun
    Niu, Tao
    [J]. SENSORS, 2023, 23 (06)
  • [8] 3D Human Pose Estimation With Adversarial Learning
    Meng, Wenming
    Hu, Tao
    Shuai, Li
    [J]. 2019 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICVRV), 2019, : 93 - 99
  • [9] 3D DRIVER POSE ESTIMATION BASED ON JOINT 2D-3D NETWORK
    Yao, Zhijie
    Liu, Yazhou
    Ji, Zexuan
    Sun, Quansen
    Lasang, Pongsak
    Shen, Shengmei
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2546 - 2550
  • [10] 3D driver pose estimation based on joint 2D-3D network
    Yao, Zhijie
    Liu, Yazhou
    Ji, Zexuan
    Sun, Quansen
    Lasang, Pongsak
    Shen, Shengmei
    [J]. IET COMPUTER VISION, 2020, 14 (03) : 84 - 91