Learning Local Recurrent Models for Human Mesh Recovery

Cited by: 3
Authors
Li, Runze [1 ,2 ]
Karanam, Srikrishna [1 ]
Li, Ren [1 ]
Chen, Terrence [1 ]
Bhanu, Bir [2 ]
Wu, Ziyan [1 ]
Affiliations
[1] United Imaging Intelligence, Cambridge, MA 02140 USA
[2] Univ Calif Riverside, Riverside, CA 92521 USA
Keywords
DOI
10.1109/3DV53792.2021.00065
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104; 0812; 0835; 1405
Abstract
We consider the problem of estimating frame-level full human body meshes given a video of a person with natural motion dynamics. While much progress in this field has been in single image-based mesh estimation, there has been a recent uptick in efforts to infer mesh dynamics from video given its role in alleviating issues such as depth ambiguity and occlusions. However, a key limitation of existing work is the assumption that all the observed motion dynamics can be modeled using one dynamical/recurrent model. While this may work well in cases with relatively simplistic dynamics, inference with in-the-wild videos presents many challenges. In particular, it is typically the case that different body parts of a person undergo different dynamics in the video, e.g., legs may move in a way that may be dynamically different from hands (e.g., a person dancing). To address these issues, we present a new method for video mesh recovery that divides the human mesh into several local parts following the standard skeletal model. We then model the dynamics of each local part with separate recurrent models, with each model conditioned appropriately based on the known kinematic structure of the human body. This results in a structure-informed local recurrent learning architecture that can be trained in an end-to-end fashion with available annotations. We conduct a variety of experiments on standard video mesh recovery benchmark datasets such as Human3.6M, MPI-INF-3DHP, and 3DPW, demonstrating the efficacy of our design of modeling local dynamics as well as establishing state-of-the-art results based on standard evaluation metrics.
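The architecture sketched in the abstract lends itself to a compact illustration: the body mesh is split into local parts, each with its own recurrent model, and each child part is conditioned on the temporal state of its kinematic parent. The following PyTorch sketch shows one way such structure-informed local recurrence could be wired up; the part grouping, the choice of GRU cells, the feature and pose dimensions, and the LocalRecurrentMesh module are illustrative assumptions, not the authors' exact design.

    # Minimal sketch of structure-informed local recurrent modeling.
    # Part names, dimensions, and the conditioning scheme are assumptions,
    # not the paper's exact architecture.
    import torch
    import torch.nn as nn

    # Hypothetical grouping of the body into local parts with parent links
    # (root listed first); a real model would follow the SMPL kinematic tree.
    PART_PARENTS = {"torso": None, "left_arm": "torso", "right_arm": "torso",
                    "left_leg": "torso", "right_leg": "torso", "head": "torso"}

    class LocalRecurrentMesh(nn.Module):
        def __init__(self, feat_dim=2048, hidden_dim=256, pose_dim=24):
            super().__init__()
            self.grus = nn.ModuleDict()
            self.heads = nn.ModuleDict()
            for part, parent in PART_PARENTS.items():
                # Each part gets its own recurrent model; non-root parts are
                # additionally conditioned on the parent's hidden state.
                in_dim = feat_dim + (hidden_dim if parent is not None else 0)
                self.grus[part] = nn.GRU(in_dim, hidden_dim, batch_first=True)
                self.heads[part] = nn.Linear(hidden_dim, pose_dim)

        def forward(self, feats):  # feats: (batch, time, feat_dim)
            hidden, poses = {}, {}
            for part, parent in PART_PARENTS.items():  # root visited first
                x = feats
                if parent is not None:
                    # Condition child dynamics on the parent's temporal state.
                    x = torch.cat([feats, hidden[parent]], dim=-1)
                out, _ = self.grus[part](x)          # (batch, time, hidden_dim)
                hidden[part] = out
                poses[part] = self.heads[part](out)  # per-frame local pose params
            return poses

    feats = torch.randn(2, 16, 2048)   # 2 clips, 16 frames of image features each
    poses = LocalRecurrentMesh()(feats)
    print({k: v.shape for k, v in poses.items()})

Feeding each part-specific recurrent model the parent's hidden trajectory is one simple way to encode the known kinematic structure, so that, for example, arm dynamics are predicted in the context of torso motion rather than in isolation.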
Pages: 555-564
Number of pages: 10
Related Papers (50 total)
  • [1] Learning Analytical Posterior Probability for Human Mesh Recovery
    Fang, Qi
    Chen, Kang
    Fan, Yinghui
    Shuai, Qing
    Li, Jiefeng
    Zhang, Weidong
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8781 - 8791
  • [2] Learning Human Mesh Recovery in 3D Scenes
    Shen, Zehong
    Cen, Zhi
    Peng, Sida
    Shuai, Qing
    Bao, Hujun
    Zhou, Xiaowei
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 17038 - 17047
  • [3] Generative Approach for Probabilistic Human Mesh Recovery using Diffusion Models
    Cho, Hanbyel
    Kim, Junmo
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 4185 - 4190
  • [4] Recovery of Sharp Features in Mesh Models
    Liu Z.
    Pan M.
    Yang Z.
    Deng J.
    [J]. Communications in Mathematics and Statistics, 2015, 3 (2) : 263 - 283
  • [5] Occluded Human Mesh Recovery
    Khirodkar, Rawal
    Tripathi, Shashank
    Kitani, Kris
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 1705 - 1715
  • [6] Probabilistic Modeling for Human Mesh Recovery
    Kolotouros, Nikos
    Pavlakos, Georgios
    Jayaraman, Dinesh
    Daniilidis, Kostas
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 11585 - 11594
  • [7] Recovery of an arbitrary edge on an existing surface mesh using local mesh modifications
    Karamete, BK
    Garimella, RV
    Shephard, MS
    [J]. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, 2001, 50 (06) : 1389 - 1409
  • [8] Deep learning for 3D human pose estimation and mesh recovery: A survey
    Liu, Yang
    Qiu, Changzhen
    Zhang, Zhiyong
    [J]. NEUROCOMPUTING, 2024, 596
  • [9] Learning with local models
    Rüping, S
    [J]. LOCAL PATTERN DETECTION, 2005, 3539 : 153 - 170
  • [10] Learning Spectral Dictionary for Local Representation of Mesh
    Gao, Zhongpai
    Yan, Junchi
    Zhai, Guangtao
    Yang, Xiaokang
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 685 - 692