A Self-Training Approach for Visual Tracking and Recognition of Complex Human Activity Patterns

Cited by: 0
Authors
Jan Bandouch
Odest Chadwicke Jenkins
Michael Beetz
Affiliations
[1] Technische Universität München, Intelligent Autonomous Systems Group
[2] Brown University, Department of Computer Science
Keywords
Markerless human motion capture; Probabilistic state estimation; Self-trained models of human motion; Activity recognition;
DOI: not available
Abstract
Automatically observing and understanding human activities is one of the big challenges in computer vision research. Potential fields of application include robotics, human-computer interaction, and medical research. In this article we present our work on unintrusive observation and interpretation of human activities for the precise recognition of human full-body motions. The presented system requires no more than three cameras and is capable of tracking a large spectrum of motions in a wide variety of scenarios, including scenarios where the subject is partially occluded, manipulates objects as part of the activity, or interacts with the environment or other humans. Our system is self-training, i.e., it is capable of learning models of human motion over time. These models are used both to improve the prediction of human dynamics and to provide the basis for the recognition and interpretation of observed activities. The accuracy and robustness obtained by our system are the combined result of several contributions. By taking an anthropometric human model and optimizing it towards use in a probabilistic tracking framework, we obtain a detailed biomechanical representation of human shape, posture and motion. Furthermore, we introduce a sophisticated hierarchical sampling strategy for tracking that is embedded in a probabilistic framework and outperforms state-of-the-art Bayesian methods. We then show how to track complex manipulation activities in everyday environments using a combination of learned human appearance models and implicit environment models. Finally, we discuss a locally consistent representation of human motion that we use as a basis for learning environment- and task-specific motion models. All methods presented in this article have been subject to extensive experimental evaluation on today's benchmarks and on several challenging sequences ranging from athletic exercises to ergonomic case studies to everyday manipulation tasks in a kitchen environment.
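
The hierarchical sampling strategy referred to in the abstract can be illustrated with a minimal sketch, assuming a generic particle-filter formulation in which the full-body posture is partitioned into kinematic sub-chains (e.g., torso, then arms, then legs) that are perturbed, weighted, and resampled in turn, each stage conditioning on the partitions already resolved. The partition layout, degree-of-freedom counts, and likelihood below are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hierarchical sampling sketch for articulated pose tracking: instead of
# diffusing all degrees of freedom at once, each partition (sub-chain) is
# perturbed, weighted against the observation, and resampled in sequence.
PARTITIONS = [("torso", slice(0, 6)), ("arms", slice(6, 14)), ("legs", slice(14, 22))]
DIM = 22            # hypothetical full-body DOF count
N_PARTICLES = 200

def likelihood(pose, observation):
    # Placeholder likelihood; a real tracker would compare the projected
    # body model against silhouettes/edges in the camera views.
    return np.exp(-np.linalg.norm(pose - observation))

def hierarchical_step(particles, observation, motion_noise=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    particles = particles.copy()
    for _, dofs in PARTITIONS:
        # Diffuse only the current partition's degrees of freedom.
        particles[:, dofs] += rng.normal(0.0, motion_noise, particles[:, dofs].shape)
        # Weight and resample before refining the next partition, so later
        # partitions are sampled conditioned on already-resolved ones.
        weights = np.array([likelihood(p, observation) for p in particles])
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
    return particles

# Usage: one tracking step on a synthetic observation.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 0.1, (N_PARTICLES, DIM))
observation = np.zeros(DIM)
particles = hierarchical_step(particles, observation, rng=rng)
print(particles.mean(axis=0)[:6])   # rough torso estimate

Resampling after each partition keeps the particles concentrated on plausible partial postures, which is the intuition behind sampling hierarchically rather than exploring the full posture space in one pass.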
Pages: 166 - 189
Number of pages: 23
Related articles
50 records in total
  • [1] A Self-Training Approach for Visual Tracking and Recognition of Complex Human Activity Patterns
    Bandouch, Jan
    Jenkins, Odest Chadwicke
    Beetz, Michael
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2012, 99 (02) : 166 - 189
  • [2] Domain Adaptation in Human Activity Recognition through Self-Training
    Al Kfari, Moh'd Khier
    Luedtke, Stefan
    COMPANION OF THE 2024 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING, UBICOMP COMPANION 2024, 2024, : 897 - 903
  • [3] SelfHAR: Improving Human Activity Recognition through Self-training with Unlabeled Data
    Tang C.I.
    Perez-Pozuelo I.
    Spathis D.
    Brage S.
    Wareham N.
    Mascolo C.
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021, 5 (01)
  • [4] SelfHAR: Improving Human Activity Recognition through Self-training with Unlabeled Data
    Tang, Chi Ian
    Perez-Pozuelo, Ignacio
    Spathis, Dimitris
    Brage, Soren
    Wareham, Nick
    Mascolo, Cecilia
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2021, 5 (01):
  • [5] A pre-training and self-training approach for biomedical named entity recognition
    Gao, Shang
    Kotevska, Olivera
    Sorokine, Alexandre
    Christian, J. Blair
    PLOS ONE, 2021, 16 (02):
  • [6] PATTERN RECOGNITION IN SELF-TRAINING MODE
    LAPTEV, VA
    MILENKIY, AV
    ENGINEERING CYBERNETICS, 1966, (06): 104+
  • [7] A Self-training Approach for Few-Shot Named Entity Recognition
    Qian, Yudong
    Zheng, Weiguo
    WEB AND BIG DATA, PT II, APWEB-WAIM 2022, 2023, 13422 : 183 - 191
  • [8] A Novel Self-training Approach for Low-resource Speech Recognition
    Singh, Satwinder
    Hou, Feng
    Wang, Ruili
    INTERSPEECH 2023, 2023, : 1588 - 1592
  • [9] Transductive Multi-Object Tracking in Complex Events by Interactive Self-Training
    Wu, Ancong
    Lin, Chengzhi
    Chen, Bogao
    Huang, Weihao
    Huang, Zeyu
    Zheng, Wei-Shi
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 4620 - 4624
  • [10] ActiveSelfHAR: Incorporating Self-Training Into Active Learning to Improve Cross-Subject Human Activity Recognition
    Wei, Baichun
    Yi, Chunzhi
    Zhang, Qi
    Zhu, Haiqi
    Zhu, Jianfei
    Jiang, Feng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (04): 6833 - 6847