Geometry-Incorporated Posing of a Full-Body Avatar From Sparse Trackers

Cited by: 0
Authors
Anvari, Taravat [1 ]
Park, Kyoungju [1 ]
Affiliations
[1] Chung Ang Univ, Sch Comp Sci & Engn, Seoul, South Korea
Keywords
3D human pose estimation; avatar; mixed reality; virtual reality;
DOI
10.1109/ACCESS.2023.3299323
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Accurately rendering a user's full body in a virtual environment is crucial for embodied mixed reality (MR) experiences. Conventional MR systems provide sparse trackers such as a headset and two hand-held controllers. Recent studies have intensively investigated learning methods that regress untracked joints from sparse trackers and have produced plausible poses in real time for MR applications. However, most studies have assumed that the position of the root joint is either known or constrained, yielding stiff pelvis motions. This paper presents the first geometry-incorporated learning method to generate the position and rotation of all joints, including the root joint, from head and hand information for a wide range of motions. We split the problem into identifying a reference frame and inferring the pose with respect to that new reference frame. Our method defines an avatar frame by setting a non-joint as the origin and transforms joint data from a world coordinate system into the avatar coordinate system. Our learning builds on a propagating long short-term memory (LSTM) network exploiting prior knowledge of the kinematic chains and the previous time domain. The learned joints are transformed back to obtain their positions with respect to the world frame. In our experiments, our method achieves competitive accuracy and robustness at a state-of-the-art speed of approximately 130 fps on motion capture datasets and on in-the-wild tracking data obtained from commercial MR devices. Our experiments confirm that the proposed method is practically applicable to MR systems.
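The abstract's core geometric step, transforming tracked joints from the world coordinate system into an avatar coordinate system anchored at a non-joint origin, and transforming the network's output back, can be sketched as follows. The paper's exact frame definition is not reproduced here; the yaw-only rotation and head-projected ground origin below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def avatar_frame(head_pos, head_yaw):
    """Build a hypothetical world->avatar rigid transform.

    Assumed convention: the avatar frame's origin is the head position
    projected onto the ground plane (a non-joint point), and its
    orientation is a yaw-only rotation about the up axis (z up).
    """
    c, s = np.cos(head_yaw), np.sin(head_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])                      # yaw-only rotation
    origin = np.array([head_pos[0], head_pos[1], 0.0])   # ground projection
    return R, origin

def world_to_avatar(joints_world, R, origin):
    # p_avatar = R^T (p_world - origin); joints are (N, 3) row vectors,
    # so right-multiplying by R applies R^T to each column vector.
    return (joints_world - origin) @ R

def avatar_to_world(joints_avatar, R, origin):
    # Inverse transform: p_world = R p_avatar + origin.
    return joints_avatar @ R.T + origin
```

Regressing joints in such a normalized frame removes the global translation and heading from the learning problem; the inverse transform then restores world-frame positions for rendering.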
Pages: 78858–78866 (9 pages)