DBMHT: A double-branch multi-hypothesis transformer for 3D human pose estimation in video

Cited by: 0
Authors
Xiang, Xuezhi [1 ,2 ]
Li, Xiaoheng [1 ]
Bao, Weijie [1 ]
Qiao, Yulong [1 ,3 ]
El Saddik, Abdulmotaleb [3 ]
Affiliations
[1] Harbin Engn Univ, Sch Informat & Commun Engn, Harbin 150001, Peoples R China
[2] Minist Ind & Informat Technol, Key Lab Adv Marine Commun & Informat Technol, Harbin 150001, Peoples R China
[3] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
Funding
Natural Science Foundation of Heilongjiang Province; National Natural Science Foundation of China;
Keywords
3D human pose estimation; Transformer; Dual-branch; Cross-hypothesis;
DOI
10.1016/j.cviu.2024.104147
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The estimation of 3D human poses from monocular videos presents a significant challenge, as existing methods suffer from depth ambiguity and self-occlusion. To overcome these problems, we propose a Double-Branch Multi-Hypothesis Transformer (DBMHT). Specifically, we use a double-branch architecture to capture temporal and spatial information and to generate multiple pose hypotheses, which are then merged by a lightweight module that integrates their spatial and temporal representations. DBMHT thus captures spatial information from each joint of the human body and temporal information from each frame of the video, while fusing hypotheses that carry different spatio-temporal information. Comprehensive evaluation on two challenging datasets (Human3.6M and MPI-INF-3DHP) demonstrates the superior performance of DBMHT, marking it as a robust and efficient approach to accurate 3D human pose estimation in dynamic scenarios. Our model surpasses the state-of-the-art approach by 1.9% in MPJPE when ground-truth 2D keypoints are used as input.
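The architectural idea in the abstract — two attention branches (spatial over joints, temporal over frames) producing multiple hypotheses that a lightweight module fuses — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the lifting weights, the toy identity-projection attention, and the softmax-weighted hypothesis merge are all illustrative stand-ins for the learned components described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, axis):
    # Toy single-head self-attention along the given axis
    # (identity Q/K/V projections, stand-in for a learned transformer layer).
    x = np.moveaxis(x, axis, -2)  # bring the attended axis next to features
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(x.shape[-1])
    out = softmax(scores, axis=-1) @ x
    return np.moveaxis(out, -2, axis)

def dbmht_sketch(pose2d, n_hyp=3, rng=None):
    """Lift 2D keypoints (frames, joints, 2) to 3D (frames, joints, 3)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Hypothetical per-hypothesis 2D->3D lifting weights (would be learned).
    W = rng.standard_normal((n_hyp, 2, 3)) * 0.1
    hyps = np.stack([pose2d @ W[h] for h in range(n_hyp)])  # (H, T, J, 3)
    # Branch 1: spatial attention over joints; branch 2: temporal over frames.
    spatial = self_attention(hyps, axis=2)
    temporal = self_attention(hyps, axis=1)
    feats = spatial + temporal                              # (H, T, J, 3)
    # Lightweight cross-hypothesis merge: softmax-weighted sum over hypotheses.
    w = softmax(feats.mean(axis=(1, 2, 3)))                 # (H,)
    return np.tensordot(w, feats, axes=1)                   # (T, J, 3)
```

The sketch only shows the data flow: each hypothesis is refined by both branches independently, and the merge reduces the hypothesis axis to a single 3D pose per frame.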
Pages: 8
Related papers
Showing items 41–50 of 50
  • [41] Frame-Padded Multiscale Transformer for Monocular 3D Human Pose Estimation
    Zhong, Yuanhong
    Yang, Guangxia
    Zhong, Daidi
    Yang, Xun
    Wang, Shanshan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 6191 - 6201
  • [42] STRFormer: Spatial-Temporal-ReTemporal Transformer for 3D human pose estimation
    Liu, Xing
    Tang, Hao
    IMAGE AND VISION COMPUTING, 2023, 140
  • [43] Transformer-based 3D Human pose estimation and action achievement evaluation
    Yang, Aolei
    Zhou, Yinghong
    Yang, Banghua
    Xu, Yulin
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2024, 45 (04): : 136 - 144
  • [44] MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network
    Mehraban, Soroush
    Adeli, Vida
    Taati, Babak
    Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, : 6905 - 6915
  • [45] Adapted human pose: monocular 3D human pose estimation with zero real 3D pose data
    Liu, Shuangjun
    Sehgal, Naveen
    Ostadabbas, Sarah
    APPLIED INTELLIGENCE, 2022, 52 (12) : 14491 - 14506
  • [47] Multi-Person Absolute 3D Pose and Shape Estimation from Video
    Zhang, Kaifu
    Li, Yihui
    Guan, Yisheng
    Xi, Ning
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT III, 2021, 13015 : 189 - 200
  • [48] On the Robustness of 3D Human Pose Estimation
    Chen, Zerui
    Huang, Yan
    Wang, Liang
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5326 - 5332
  • [49] SlowFastFormer for 3D human pose estimation
    Zhou, Lu
    Chen, Yingying
    Wang, Jinqiao
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 243
  • [50] Overview of 3D Human Pose Estimation
    Lin, Jianchu
    Li, Shuang
    Qin, Hong
    Wang, Hongchang
    Cui, Ning
    Jiang, Qian
    Jian, Haifang
    Wang, Gongming
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 134 (03): : 1621 - 1651