PoseAugment: Generative Human Pose Data Augmentation with Physical Plausibility for IMU-Based Motion Capture

Cited by: 0
Authors:
Li, Zhuojun [1,2]
Yu, Chun [1,2,3]
Liang, Chen [1,2]
Shi, Yuanchun [1,2,3]
Affiliations:
[1] Tsinghua University, Department of Computer Science and Technology, Beijing, China
[2] Key Laboratory of Pervasive Computing, Ministry of Education, Beijing, China
[3] Qinghai University, Xining, China
DOI: 10.1007/978-3-031-73411-3_4
CLC number: TP18 (Theory of Artificial Intelligence)
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Data scarcity is a crucial factor that hampers the performance of IMU-based human motion capture models. However, effective data augmentation for IMU-based motion capture is challenging: it must capture the physical relations and constraints of the human body while maintaining the data distribution and quality. We propose PoseAugment, a novel pipeline combining VAE-based pose generation with physical optimization. Given a pose sequence, the VAE module generates an unlimited number of new poses with both high fidelity and diversity while preserving the data distribution. The physical module then optimizes the generated poses to satisfy physical constraints with minimal restriction of the motion. Finally, high-quality IMU data are synthesized from the augmented poses to train motion capture models. Experiments show that PoseAugment outperforms previous data augmentation and pose generation methods in motion capture accuracy, revealing the strong potential of our method to alleviate the data collection burden for IMU-based motion capture and related tasks driven by human poses.
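
To make the pipeline concrete, below is a minimal, illustrative Python/PyTorch sketch of the data flow the abstract describes: a VAE samples new poses near a recorded sequence, a physical step projects them toward plausibility, and virtual accelerometer data are derived from the augmented motion. Everything here is an assumption for illustration, not the paper's implementation: the MLP encoder/decoder, the axis-angle pose parameterization, and the names PoseVAE, physical_projection, and synthesize_acceleration are hypothetical, and the joint-limit clamp is only a crude stand-in for the paper's physical optimization module.

    import torch
    import torch.nn as nn

    N_JOINTS = 24                    # assumed SMPL-style skeleton
    POSE_DIM = N_JOINTS * 3         # axis-angle per joint (assumption)
    LATENT = 16

    class PoseVAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * LATENT))
            self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                     nn.Linear(128, POSE_DIM))

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return self.dec(z), mu, logvar

        @torch.no_grad()
        def augment(self, x, scale=0.5):
            # Sample near the posterior mean so new poses stay on-distribution.
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + scale * torch.randn_like(mu) * (0.5 * logvar).exp()
            return self.dec(z)

    def physical_projection(pose, limit=2.8):
        # Crude stand-in for the paper's physical optimization: clamp axis-angle
        # magnitudes to a joint-limit proxy, perturbing the motion minimally.
        return pose.clamp(-limit, limit)

    def synthesize_acceleration(joint_pos, fps=60.0):
        # Virtual accelerometer readings via second-order finite differences
        # over a (frames, joints, 3) trajectory of joint positions.
        return (joint_pos[2:] - 2 * joint_pos[1:-1] + joint_pos[:-2]) * fps ** 2

    if __name__ == "__main__":
        vae = PoseVAE()                           # would be trained on mocap data
        poses = 0.1 * torch.randn(100, POSE_DIM)  # stand-in recorded pose sequence
        aug = physical_projection(vae.augment(poses))
        joints = torch.randn(100, N_JOINTS, 3)    # stand-in joint trajectories
        acc = synthesize_acceleration(joints)     # (98, 24, 3) virtual IMU accel
        print(aug.shape, acc.shape)

The actual physical module enforces far richer constraints than a clamp (with minimal restriction of the motion), and the synthesized IMU data would include sensor orientations as well as accelerations; this sketch mirrors only the overall generate-optimize-synthesize flow.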
Pages: 55-73 (19 pages)