Gesture Recognition on a New Multi-Modal Hand Gesture Dataset

Cited by: 1
Authors
Schak, Monika [1]
Gepperth, Alexander [1]
Affiliations
[1] Fulda Univ Appl Sci, D-36037 Fulda, Germany
Keywords
Hand Gestures; Dataset; Multimodal Data; Data Fusion; Sequence Detection;
DOI
10.5220/0010982200003122
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We present a new large-scale multi-modal dataset for free-hand gesture recognition. The freely available dataset consists of 79,881 sequences, grouped into six classes representing typical hand gestures in human-machine interaction. Each sample contains four independent modalities (arriving at different frequencies) recorded from two independent sensors: a fixed 3D camera providing video, audio and 3D data, and a wearable acceleration sensor attached to the wrist. The gesture classes were chosen specifically with investigations of multi-modal fusion in mind: for example, two gesture classes can be distinguished mainly by audio, while the other four exhibit no audio signal beyond white noise. An important property of this dataset is that it was recorded from a single person. While this somewhat reduces variability, it virtually eliminates the risk of incorrectly performed gestures and thus enhances the quality of the data. By deploying a simple LSTM-based gesture classifier in a live system, we demonstrate that generalization to other persons is nevertheless high. In addition, we show the validity and internal consistency of the data by training single-modality LSTM and DNN classifiers to high precision.
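The record itself contains no code; purely as an illustration, the following is a minimal PyTorch sketch of a single-modality LSTM sequence classifier of the kind the abstract describes. The class name GestureLSTM, the feature and hidden dimensions, and the 40-frame sequence length are assumptions made for this sketch, not the authors' implementation; only the six-class output corresponds to the dataset.

```python
# Minimal sketch of a single-modality LSTM gesture classifier.
# FEATURE_DIM, HIDDEN_DIM and the sequence length are illustrative assumptions;
# only NUM_CLASSES = 6 reflects the dataset described in the abstract.
import torch
import torch.nn as nn

FEATURE_DIM = 64    # per-frame feature size for one modality (assumption)
HIDDEN_DIM = 128    # LSTM hidden state size (assumption)
NUM_CLASSES = 6     # six gesture classes, as in the dataset

class GestureLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode the frame sequence with a single LSTM layer.
        self.lstm = nn.LSTM(FEATURE_DIM, HIDDEN_DIM, batch_first=True)
        # Map the final hidden state to class logits.
        self.head = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, time, FEATURE_DIM)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, HIDDEN_DIM)
        return self.head(h_n[-1])    # logits: (batch, NUM_CLASSES)

# Forward pass on a dummy batch of 8 sequences with 40 frames each.
model = GestureLSTM()
logits = model(torch.randn(8, 40, FEATURE_DIM))
print(logits.shape)  # torch.Size([8, 6])
```

In the setting the abstract outlines, one such classifier would be trained per modality (video, audio, 3D, acceleration), with multi-modal fusion then studied on top of the per-modality features or predictions.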
Pages: 122-131
Page count: 10
Related Papers (50 records in total)
  • [1] Gesture Recognition and Multi-modal Fusion on a New Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PATTERN RECOGNITION APPLICATIONS AND METHODS, ICPRAM 2021, ICPRAM 2022, 2023, 13822 : 76 - 97
  • [2] Multi-modal Gesture Recognition Challenge 2013: Dataset and Results
    Escalera, Sergio
    Gonzalez, Jordi
    Baro, Xavier
    Reyes, Miguel
    Lopes, Oscar
    Guyon, Isabelle
    Athitsos, Vassilis
    Escalante, Hugo J.
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 445 - 452
  • [3] MULTI-MODAL LEARNING FOR GESTURE RECOGNITION
    Cao, Congqi
    Zhang, Yifan
    Lu, Hanqing
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015
  • [4] Multi-modal zero-shot dynamic hand gesture recognition
    Rastgoo, Razieh
    Kiani, Kourosh
    Escalera, Sergio
    Sabokrou, Mohammad
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247
  • [5] On Multi-modal Fusion for Freehand Gesture Recognition
    Schak, Monika
    Gepperth, Alexander
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 862 - 873
  • [6] ModDrop: Adaptive Multi-Modal Gesture Recognition
    Neverova, Natalia
    Wolf, Christian
    Taylor, Graham
    Nebout, Florian
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (08) : 1692 - 1706
  • [7] Fusing Multi-modal Features for Gesture Recognition
    Wu, Jiaxiang
    Cheng, Jian
    Zhao, Chaoyang
    Lu, Hanqing
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 453 - 459
  • [8] 2MLMD: Multi-modal Leap Motion Dataset for Home Automation Hand Gesture Recognition Systems
    Bhiri, Nahla Majdoub
    Ameur, Safa
    Jegham, Imen
    Alouani, Ihsen
    Ben Khalifa, Anouar
    [J]. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2024
  • [9] Multi-modal user interface combining eye tracking and hand gesture recognition
    Kim, Hansol
    Suh, Kun Ha
    Lee, Eui Chul
    [J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2017, 11 (03) : 241 - 250
  • [10] Multi-modal fusion for robust hand gesture recognition based on heterogeneous networks
    Zou, YongXiang
    Cheng, Long
    Han, LiJun
    Li, ZhengWei
    [J]. Science China Technological Sciences, 2023, (11) : 3219 - 3230