Generative Model-Based Loss to the Rescue: A Method to Overcome Annotation Errors for Depth-Based Hand Pose Estimation

Cited by: 2
Authors
Wang, Jiayi [1 ]
Mueller, Franziska [1 ]
Bernard, Florian [1 ]
Theobalt, Christian [1 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrücken, Germany
DOI
10.1109/FG47880.2020.00013
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose to use a model-based generative loss for training hand pose estimators on depth images based on a volumetric hand model. This additional loss allows training of a hand pose estimator that accurately infers the entire set of 21 hand keypoints while only using supervision for 6 easy-to-annotate keypoints (fingertips and wrist). We show that our partially-supervised method achieves results that are comparable to those of fully-supervised methods which enforce articulation consistency. Moreover, for the first time we demonstrate that such an approach can be used to train on datasets that have erroneous annotations, i.e. "ground truth" with notable measurement errors, while obtaining predictions that explain the depth images better than the given "ground truth".
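The abstract describes a training objective that combines sparse keypoint supervision (6 of the 21 keypoints) with a model-based generative loss evaluated against the depth image. Below is a minimal sketch of such a combined objective in PyTorch; it is not the authors' implementation, and the sphere-based stand-in for the volumetric hand model, the keypoint index set, and all function names are illustrative assumptions.

import torch

# Hypothetical indices of the easy-to-annotate keypoints: wrist + 5 fingertips
# (out of the full set of 21 hand keypoints).
SUPERVISED_IDS = [0, 4, 8, 12, 16, 20]

def supervised_loss(pred_kp, gt_kp):
    # L2 loss on the sparse, annotated subset of keypoints.
    # pred_kp, gt_kp: (B, 21, 3) keypoints in camera space (metres).
    diff = pred_kp[:, SUPERVISED_IDS] - gt_kp[:, SUPERVISED_IDS]
    return (diff ** 2).sum(dim=-1).mean()

def generative_depth_loss(pred_kp, depth_points, radius=0.01):
    # Simplified generative data term: every point back-projected from the
    # depth image should lie close to the surface of at least one sphere
    # centred at a predicted keypoint (a crude stand-in for a volumetric model).
    # depth_points: (B, N, 3) back-projected depth pixels.
    dists = torch.cdist(depth_points, pred_kp)                       # (B, N, 21)
    to_surface = torch.clamp(dists.min(dim=-1).values - radius, min=0.0)
    return (to_surface ** 2).mean()

def total_loss(pred_kp, gt_kp, depth_points, w_gen=1.0):
    # Partial keypoint supervision plus the generative term that constrains
    # the unannotated keypoints via the depth observation.
    return supervised_loss(pred_kp, gt_kp) + w_gen * generative_depth_loss(pred_kp, depth_points)

In this simplified form the generative term only measures whether the observed depth surface is explained by the predicted hand; the paper's actual volumetric model and loss formulation are more involved, so the sketch illustrates the structure of the objective rather than reproducing the method.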
Pages: 101-108
Number of pages: 8