Toward accurate real-time marker labeling for live optical motion capture

Cited by: 2

Authors
Shihong Xia
Le Su
Xinyu Fei
Han Wang
Institutions
[1] Chinese Academy of Sciences,Institute of Computing Technology
[2] University of Chinese Academy of Sciences
Source
The Visual Computer | 2017 / Volume 33
Keywords
Motion capture; Marker labeling; Graph matching; Point correspondence;
DOI
Not available
Abstract
Marker labeling plays an important role in the optical motion capture pipeline, especially in real-time applications; however, the accuracy of online marker labeling remains unclear. This paper presents a novel, accurate, real-time online marker labeling algorithm that simultaneously handles missing and ghost markers. We first introduce a soft graph matching model that automatically labels the markers using the Hungarian algorithm to find the globally optimal matching. The key idea is to formulate the problem in a combinatorial optimization framework. The objective function minimizes the matching cost, which simultaneously measures the difference between markers in the model and data graphs as well as their local geometric structures, expressed as edge constraints. To achieve high subsequent marker labeling accuracy, which may be degraded by limb occlusions or self-occlusions, we also propose an online high-quality full-body pose reconstruction process to estimate the positions of missing markers. We demonstrate the power of our approach by capturing a wide range of human movements, and achieve state-of-the-art accuracy in comparisons against alternative methods and a commercial system, VICON.
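The matching objective described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the unary term measures marker position differences between the model graph (e.g., the previous frame) and the data graph (the current frame's unlabeled points), the pairwise term penalizes changes in edge lengths of the local geometric structure, and brute-force enumeration stands in for the Hungarian solver used in the paper. All function and parameter names (`matching_cost`, `label_markers`, `w_edge`) are hypothetical.

```python
from itertools import permutations
from math import dist  # Euclidean distance, Python 3.8+

def matching_cost(model, data, edges, assignment, w_edge=0.5):
    """Cost of assigning model marker i to data point assignment[i].

    Combines a unary term (marker position differences) with a pairwise
    term (edge-length differences), mimicking the soft graph matching
    objective with edge constraints.
    """
    # Unary term: distance between each model marker and its assigned data point
    cost = sum(dist(model[i], data[assignment[i]]) for i in range(len(model)))
    # Pairwise term: preserve edge lengths of the local geometric structure
    for i, j in edges:
        e_model = dist(model[i], model[j])
        e_data = dist(data[assignment[i]], data[assignment[j]])
        cost += w_edge * abs(e_model - e_data)
    return cost

def label_markers(model, data, edges):
    """Return the assignment (model index -> data index) minimizing the cost.

    Exhaustive search over permutations; the Hungarian algorithm solves the
    unary part in O(n^3) and scales to realistic marker counts.
    """
    best = min(permutations(range(len(data))),
               key=lambda a: matching_cost(model, data, edges, a))
    return list(best)

# Example: three markers from the previous frame, current frame in shuffled order
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
data = [(0.05, 1.0, 0.0), (0.0, 0.02, 0.0), (1.02, 0.0, 0.0)]
edges = [(0, 1), (1, 2), (0, 2)]  # skeleton edges between markers
print(label_markers(model, data, edges))  # recovers the original labels: [1, 2, 0]
```

In a live pipeline, the model graph would be updated each frame from the reconstructed pose, so that occluded (missing) markers can still contribute positions to the cost.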
Pages: 993 - 1003
Page count: 10
Related Papers
50 results
  • [21] Online Marker Labeling for Fully Automatic Skeleton Tracking in Optical Motion Capture
    Meyer, Johannes
    Kuderer, Markus
    Mueller, Joerg
    Burgard, Wolfram
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2014, : 5652 - 5657
  • [22] From Motion Capture to Real-Time Character Animation
    Multon, Franck
    Kulpa, Richard
    Hoyet, Ludovic
    Komura, Taku
    [J]. MOTION IN GAMES, FIRST INTERNATIONAL WORKSHOP, MIG 2008, 2008, 5277 : 72 - +
  • [23] Towards Scalable and Real-time Markerless Motion Capture
    Albanis, Georgios
    Chatzitofis, Anargyros
    Thermos, Spyridon
    Zioulis, Nikolaos
    Kolomvatsos, Kostas
    [J]. 2022 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2022), 2022, : 715 - 716
  • [24] Development of real-time and highly accurate wireless motion capture system utilizing soft magnetic core
    Hashi, S
    Tokunaga, Y
    Yabukami, S
    Toyoda, M
    Ishiyama, K
    Okazaki, Y
    Arai, KI
    [J]. IEEE TRANSACTIONS ON MAGNETICS, 2005, 41 (10) : 4191 - 4193
  • [25] Graph Matching for Marker Labeling and Missing Marker Reconstruction With Bone Constraint by LSTM in Optical Motion Capture
    Li, Jianfang
    Xiao, Degui
    Li, Keqin
    Li, Jiazhi
    [J]. IEEE ACCESS, 2021, 9 : 34868 - 34881
  • [26] MIMIC: Real-time marker-free motion capture system to create an agent in the virtual space
    Kim, SE
    Lee, RH
    Park, CJ
    Lee, IH
    [J]. INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, VOLS I AND II, PROCEEDINGS, 2002, : 48 - 49
  • [27] Estimating human skeleton parameters and configuration in real-time from markered optical motion capture
    Cameron, Jonathan
    Lasenby, Joan
    [J]. ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS, 2008, 5098 : 92 - 101
  • [28] Real-time avatar animation steered by live body motion
    Schreer, O
    Tanger, R
    Eisert, P
    Kauff, P
    Kaspar, B
    Englert, R
    [J]. IMAGE ANALYSIS AND PROCESSING - ICIAP 2005, PROCEEDINGS, 2005, 3617 : 147 - 154
  • [29] MOVIN: Real-time Motion Capture using a Single LiDAR
    Jang, Deok-Kyeong
    Yang, Dongseok
    Jang, Deok-Yun
    Choi, Byeoli
    Jin, Taeil
    Lee, Sung-Hee
    [J]. COMPUTER GRAPHICS FORUM, 2023, 42 (07)
  • [30] Real-Time Human Motion Capture with Multiple Depth Cameras
    Shafaei, Alireza
    Little, James J.
    [J]. 2016 13TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2016, : 24 - 31