MOVIN: Real-time Motion Capture using a Single LiDAR

Times cited: 0
|
Authors
Jang, Deok-Kyeong [1 ,2 ]
Yang, Dongseok [1 ,2 ]
Jang, Deok-Yun [1 ,3 ]
Choi, Byeoli [1 ,2 ]
Jin, Taeil [2 ]
Lee, Sung-Hee [2 ]
Affiliations
[1] MOVIN Inc, Santa Clara, CA USA
[2] Korea Adv Inst Sci & Technol KAIST, Daejeon, South Korea
[3] Gwangju Inst Sci & Technol GIST, Gwangju, South Korea
Keywords
CCS Concepts: • Computing methodologies → Motion capture; Motion processing; Neural networks; Human pose estimation
DOI
10.1111/cgf.14961
CLC classification
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Recent advancements in technology have brought forth new forms of interactive applications, such as the social metaverse, where end users interact with each other through their virtual avatars. In such applications, precise full-body tracking is essential for an immersive experience and a sense of embodiment with the virtual avatar. However, current motion capture systems are not easily accessible to end users due to their high cost, the special skills required to operate them, or the discomfort associated with wearable devices. In this paper, we present MOVIN, a data-driven generative method for real-time motion capture with global tracking, using a single LiDAR sensor. Our autoregressive conditional variational autoencoder (CVAE) model learns the distribution of pose variations conditioned on the given 3D point cloud from LiDAR. As a central factor for high-accuracy motion capture, we propose a novel feature encoder that learns the correlation between historical 3D point cloud data and global and local pose features, resulting in effective learning of the pose prior. Global pose features include root translation, rotation, and foot contacts, while local features comprise joint positions and rotations. Subsequently, a pose generator takes the sampled latent variable together with the features from the previous frame to generate a plausible current pose. Our framework accurately predicts the performer's 3D global information and local joint details while maintaining temporally coherent movement across frames. We demonstrate the effectiveness of our architecture through quantitative and qualitative evaluations, comparing it against state-of-the-art methods. Additionally, we implement a real-time application to showcase our method in real-world scenarios.
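The abstract outlines a pipeline of a point-cloud feature encoder, a conditional pose prior, and an autoregressive pose generator. The following is a minimal PyTorch-style sketch of that structure, not the authors' implementation: the module names, the feature and pose dimensions (POINT_FEAT, POSE_DIM, LATENT_DIM), and the PointNet-style max-pooling point encoder are assumptions made purely for illustration.

```python
# Minimal sketch of an autoregressive CVAE pose generator conditioned on
# per-frame LiDAR point-cloud features. Dimensions and module structure are
# assumptions, not the MOVIN implementation.
import torch
import torch.nn as nn

POINT_FEAT = 256   # assumed size of the encoded point-cloud feature
POSE_DIM = 205     # assumed pose vector: root translation/rotation, contacts, joint positions/rotations
LATENT_DIM = 32    # assumed latent dimension

class PointCloudEncoder(nn.Module):
    """Per-frame point-cloud encoder (PointNet-style max pooling; an assumption)."""
    def __init__(self, feat=POINT_FEAT):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat))

    def forward(self, pts):                      # pts: (B, N, 3) LiDAR points
        return self.mlp(pts).max(dim=1).values   # (B, feat) global feature

class PosePrior(nn.Module):
    """Conditional prior over the latent variable, given point features and the previous pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(POINT_FEAT + POSE_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.logvar = nn.Linear(256, LATENT_DIM)

    def forward(self, cond):
        h = self.net(cond)
        return self.mu(h), self.logvar(h)

class PoseGenerator(nn.Module):
    """Decoder: maps a latent sample plus the condition to the current-frame pose vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + POINT_FEAT + POSE_DIM, 512), nn.ReLU(),
            nn.Linear(512, POSE_DIM))

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1))

@torch.no_grad()
def autoregressive_step(pc_encoder, prior, generator, points, prev_pose):
    """One inference step: encode the LiDAR frame, sample z from the conditional
    prior, and decode the current pose conditioned on the previous pose."""
    cond = torch.cat([pc_encoder(points), prev_pose], dim=-1)
    mu, logvar = prior(cond)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    return generator(z, cond)

# Roll the model forward frame by frame on random stand-in data.
pc_enc, prior, gen = PointCloudEncoder(), PosePrior(), PoseGenerator()
pose = torch.zeros(1, POSE_DIM)
for _ in range(3):
    frame = torch.randn(1, 1024, 3)              # one LiDAR point-cloud frame
    pose = autoregressive_step(pc_enc, prior, gen, frame, pose)
print(pose.shape)                                # torch.Size([1, 205])
```

At training time a CVAE would additionally use a posterior encoder that sees the ground-truth current pose, with a KL term against this conditional prior; that part is omitted from the sketch above.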
Pages: 12
Related papers (50 total)
  • [41] Constrained inverse kinematics technique for real-time motion capture animation
    Tang, Wen
    Cavazza, Marc
    Mountain, Dale
    Earnshaw, Rae
    Visual Computer, 1999, 15 (07): 413 - 425
  • [42] Real-Time Prediction of Joint Forces by Motion Capture and Machine Learning
    Giarmatzis, Georgios
    Zacharaki, Evangelia I.
    Moustakas, Konstantinos
    SENSORS, 2020, 20 (23) : 1 - 19
  • [43] Real-Time Human Motion Capture Driven by a Wireless Sensor Network
    Chen, Peng-zhan
    Li, Jie
    Luo, Man
    Zhu, Nian-hua
    INTERNATIONAL JOURNAL OF COMPUTER GAMES TECHNOLOGY, 2015, 2015
  • [44] A real-time interaction strategy for virtual maintenance based on motion capture
    Deng, Gangfeng
    Huang, Xianxiang
    Gao, Qinhe
    Zhu, Quanmin
    Zhang, Zhili
    Zhan, Ying
    INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2014, 49 (3-4) : 332 - 339
  • [45] HybridFusion: Real-Time Performance Capture Using a Single Depth Sensor and Sparse IMUs
    Zheng, Zerong
    Yu, Tao
    Li, Hao
    Guo, Kaiwen
    Dai, Qionghai
    Fang, Lu
    Liu, Yebin
    COMPUTER VISION - ECCV 2018, PT IX, 2018, 11213 : 389 - 406
  • [46] Real-time Human Action Recognition From Motion Capture Data
    Vantigodi, Suraj
    Babu, R. Venkatesh
    2013 FOURTH NATIONAL CONFERENCE ON COMPUTER VISION, PATTERN RECOGNITION, IMAGE PROCESSING AND GRAPHICS (NCVPRIPG), 2013
  • [47] Motion2Fusion: Real-time Volumetric Performance Capture
    Dou, Mingsong
    Davidson, Philip
    Fanello, Sean Ryan
    Khamis, Sameh
    Kowdle, Adarsh
    Rhemann, Christoph
    Tankovich, Vladimir
    Izadi, Shahram
    ACM TRANSACTIONS ON GRAPHICS, 2017, 36 (06)
  • [48] Multiresolution coding of motion capture data for real-time multimedia applications
    Khan, Murtaza Ali
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (15) : 16683 - 16698
  • [49] Evaluating movement qualities with visual feedback for real-time motion capture
    Hussain, Aishah
    Modekjaer, Camilla
    Austad, Nicoline Warming
    Dahl, Sofia
    Erkut, Cumhur
    PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON MOVEMENT AND COMPUTING MOCO'19, 2019
  • [50] Real-time marker prediction and CoR estimation in optical motion capture
    Aristidou, Andreas
    Lasenby, Joan
    The Visual Computer, 2013, 29: 7 - 26