Temporal Action Localization for Inertial-based Human Activity Recognition

Cited by: 0
Authors
Bock, Marius [1 ]
Moeller, Michael [2 ]
Van Laerhoven, Kristof [3]
Affiliations
[1] Univ Siegen, Ubiquitous Comp, Comp Vis, Siegen, Germany
[2] Univ Siegen, Comp Vis, Siegen, Germany
[3] Univ Siegen, Ubiquitous Comp, Siegen, Germany
Keywords
Deep Learning; Inertial-based Human Activity Recognition; Body-worn Sensors; Temporal Action Localization;
DOI
10.1145/3699770
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
As of today, state-of-the-art activity recognition from wearable sensors relies on algorithms being trained to classify fixed windows of data. In contrast, video-based Human Activity Recognition, known as Temporal Action Localization (TAL), has followed a segment-based prediction approach, localizing activity segments in a timeline of arbitrary length. This paper is the first to systematically demonstrate the applicability of state-of-the-art TAL models for both offline and near-online Human Activity Recognition (HAR), using raw inertial data as well as pre-extracted latent features as input. Offline prediction results show that TAL models are able to outperform popular inertial models on a multitude of HAR benchmark datasets, with improvements reaching as much as 26% in F1-score. We show that by analyzing timelines as a whole, TAL models can produce more coherent segments and achieve higher NULL-class accuracy across all datasets. We demonstrate that TAL is less suited for the immediate classification of small-sized windows of data, yet offers an interesting perspective on inertial-based HAR: it alleviates the need for fixed-size windows and enables algorithms to recognize activities of arbitrary length. With design choices and training concepts yet to be explored, we argue that TAL architectures could be of significant value to the inertial-based HAR community. The code and data to reproduce the experiments are publicly available via github.com/mariusbock/tal_for_har.
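The contrast the abstract draws — fixed-window classification versus segment-based prediction — can be illustrated with a minimal sketch. The function names, window sizes, and merging heuristic below are illustrative assumptions for exposition only; they are not taken from the authors' codebase.

```python
import numpy as np

def sliding_windows(signal, win_size, stride):
    # Classic inertial HAR: cut a (T, C) sensor stream into fixed-size
    # windows, each of which is classified independently.
    starts = range(0, len(signal) - win_size + 1, stride)
    return np.stack([signal[s:s + win_size] for s in starts])

def windows_to_segments(labels, win_size, stride):
    # Merge consecutive identical window predictions into
    # (start, end, label) segments -- the form in which a TAL model
    # natively emits predictions, without any fixed window size.
    segments = []
    for i, lab in enumerate(labels):
        start, end = i * stride, i * stride + win_size
        if segments and segments[-1][2] == lab and segments[-1][1] >= start:
            segments[-1] = (segments[-1][0], end, lab)  # extend last segment
        else:
            segments.append((start, end, lab))
    return segments

# Toy 3-axis accelerometer stream of 10 samples.
stream = np.zeros((10, 3))
wins = sliding_windows(stream, win_size=4, stride=2)      # shape (4, 4, 3)
segs = windows_to_segments([0, 0, 1, 1], win_size=4, stride=2)
```

A window-based pipeline must pick `win_size` up front and splits long activities across many windows, whereas the segment representation returned by `windows_to_segments` is what a TAL model predicts directly, which is why the paper reports more coherent segments on whole timelines.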
Pages: 19
Related Papers (50 total)
  • [1] Boosting Inertial-Based Human Activity Recognition With Transformers
    Shavit, Yoli
    Klein, Itzik
    IEEE ACCESS, 2021, 9 : 53540 - 53547
  • [2] On the Homogenization of Heterogeneous Inertial-based Databases for Human Activity Recognition
    Ferrari, Anna
    Mobilio, Marco
    Micucci, Daniela
    Napoletano, Paolo
    2019 IEEE WORLD CONGRESS ON SERVICES (IEEE SERVICES 2019), 2019, : 295 - 300
  • [3] Tag Localization with Asynchronous Inertial-Based Shifting and Trilateration
    Alma'aitah, Abdallah Y.
    Eslim, Lobna M.
    Hassanein, Hossam S.
    SENSORS, 2019, 19 (23)
  • [4] Inertial-Based Localization for Unmanned Helicopters Against GNSS Outage
    Lau, Tak Kit
    Liu, Yun-Hui
    Lin, Kai Wun
    IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2013, 49 (03) : 1932 - 1949
  • [5] Efficient LiDAR/inertial-based localization with prior map for autonomous robots
    Song, Jian
    Chen, Yutian
    Liu, Xun
    Zheng, Nan
    INTELLIGENT SERVICE ROBOTICS, 2024, 17 (02) : 119 - 133
  • [6] Temporal Approaches for Human Activity Recognition using Inertial Sensors
    Garcia, Felipe Aparecido
    Ranieri, Caetano Mazzoni
    Romero, Roseli A. F.
    2019 LATIN AMERICAN ROBOTICS SYMPOSIUM, 2019 BRAZILIAN SYMPOSIUM ON ROBOTICS (SBR) AND 2019 WORKSHOP ON ROBOTICS IN EDUCATION (LARS-SBR-WRE 2019), 2019, : 121 - 125
  • [7] Temporal Superpixels based Human Action Localization
    Ullah, Sami
    Hassan, Najmul
    Bhatti, Naeem
    2018 14TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES (ICET), 2018,
  • [8] Spatio-Temporal Action Localization for Human Action Recognition in Large Dataset
    Megrhi, Sameh
    Jmal, Marwa
    Beghdadi, Azeddine
    Mseddi, Wided
    VIDEO SURVEILLANCE AND TRANSPORTATION IMAGING APPLICATIONS 2015, 2015, 9407
  • [9] YOLO based Human Action Recognition and Localization
    Shinde, Shubham
    Kothari, Ashwin
    Gupta, Vikram
    INTERNATIONAL CONFERENCE ON ROBOTICS AND SMART MANUFACTURING (ROSMA2018), 2018, 133 : 831 - 838