Human Motion Reconstruction and Synthesis of Human Skills

Cited by: 17
Authors
Demircan, Emel [1 ]
Besier, Thor [2 ]
Menon, Samir [1 ]
Khatib, Oussama [1 ]
Affiliations
[1] Stanford Univ, Artificial Intelligence Lab, Stanford, CA 94305 USA
[2] Stanford Univ, Human Performance Lab, Stanford, CA 94305 USA
Keywords
Motion reconstruction; marker space control; musculoskeletal model; human motion synthesis; knee-joint
DOI
10.1007/978-90-481-9262-5_30
CLC classification: TP [Automation technology, computer technology]
Subject classification code: 0812
Abstract
Reconstructing human motion dynamics in real time is a challenging problem since it requires accurate motion sensing, subject-specific models, and efficient reconstruction algorithms. A promising approach is to construct accurate human models and control them to behave the same way the subject does. Here, we demonstrate that the whole-body control approach can efficiently reconstruct a subject's motion dynamics in real-world task space when given a scaled model and marker-based motion capture data. We scaled a biomechanically realistic musculoskeletal model to a subject, captured motion with suitably placed markers, and used an operational space controller to directly track the motion of the markers with the model. Our controller tracked the positions, velocities, and accelerations of many markers in parallel by assigning them to tasks with different priority levels based on how free their parent limbs were. We executed lower-priority marker tracking tasks in the successive null spaces of the higher-priority tasks to resolve their interdependencies. The controller accurately reproduced the subject's full-body dynamics while executing a throwing motion in near real time. Its reconstruction closely matched the marker data, and its performance was consistent over the entire motion. Our findings suggest that the direct marker tracking approach is an attractive tool for reconstructing and synthesizing the dynamic motion of humans and other complex articulated body systems in a computationally efficient manner.
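The prioritized, null-space-consistent marker tracking described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration and not the authors' implementation: it assumes a generic rigid-body dynamics backend that supplies a joint-space inertia matrix and bias torques, and it composes per-marker operational-space tasks in successive null spaces, highest priority first. All names (prioritized_marker_control, mass_matrix, bias_forces, the task list layout) are placeholders introduced here for illustration.

```python
# Minimal sketch of prioritized marker tracking via successive null-space
# projection, in the spirit of the operational space formulation.
# NOTE: hypothetical placeholder code, not the authors' implementation.
import numpy as np

def prioritized_marker_control(q, dq, tasks, mass_matrix, bias_forces):
    """Return joint torques that track marker tasks in descending priority.

    tasks: list of (J, xdd_des) tuples, highest priority first, where
           J is the 3 x n Jacobian of a marker and xdd_des its desired
           acceleration (e.g. a PD law on marker position/velocity error).
    """
    n = q.shape[0]
    A = mass_matrix(q)            # joint-space inertia matrix
    b = bias_forces(q, dq)        # gravity + Coriolis/centrifugal torques
    A_inv = np.linalg.inv(A)
    tau = np.zeros(n)
    N = np.eye(n)                 # null-space projector of higher-priority tasks

    for J, xdd_des in tasks:
        Jp = J @ N                                  # Jacobian restricted to the remaining freedom
        Lambda = np.linalg.pinv(Jp @ A_inv @ Jp.T)  # task-space inertia (pseudoinverse for robustness)
        F = Lambda @ xdd_des                        # operational-space force for this marker task
        tau += Jp.T @ F                             # torque contribution, already null-space consistent
        Jbar = A_inv @ Jp.T @ Lambda                # dynamically consistent generalized inverse
        N = N @ (np.eye(n) - Jbar @ Jp)             # shrink the null space for lower-priority tasks

    return tau + b                                  # compensate bias forces in joint space
```

In a setup like the one the abstract describes, the desired marker accelerations would come from a PD law on marker position and velocity error against the motion-capture data, and the priority ordering would follow how free each marker's parent limb is.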
Pages: 283 / +
Number of pages: 3