Gesture Recognition and Multi-modal Fusion on a New Hand Gesture Dataset

Cited by: 1
Authors
Schak, Monika [1 ]
Gepperth, Alexander [1 ]
Affiliations
[1] Fulda Univ Appl Sci, D-36037 Fulda, Germany
Keywords
Hand gestures; Dataset; Multi-modal data; Data fusion; Sequence classification; Gesture recognition;
DOI
10.1007/978-3-031-24538-1_4
CLC classification number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a baseline for gesture recognition using state-of-the-art sequence classifiers on a new, freely available multi-modal dataset of free-hand gestures. The dataset consists of roughly 100,000 samples grouped into six classes of typical, easy-to-learn hand gestures. It was recorded with two independent sensors, enabling experiments on multi-modal data fusion at several depths, i.e., early, intermediate, and late fusion. Because the entire dataset was recorded by a single person, data quality is very high, with little to no risk of incorrectly performed gestures. We report results for unimodal sequence classification with both an LSTM and a CNN classifier. We also show that multi-modal fusion of all four modalities, realized as late fusion of the output layers of LSTM classifiers each trained on a single modality, results in higher precision. Finally, we demonstrate that live gesture classification with an LSTM-based classifier is feasible and that generalization to other persons performing the gestures is high.
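To make the late-fusion setup described in the abstract concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: one independent LSTM sequence classifier per modality, with per-modality class probabilities averaged at inference time. The class names, feature dimensions, hidden size, and the mean-of-softmax fusion rule are illustrative assumptions.

```python
# Minimal late-fusion sketch (illustrative only, not the paper's code).
# Each modality gets its own LSTM sequence classifier; at inference time
# the per-modality softmax outputs are averaged ("late fusion" of output layers).
import torch
import torch.nn as nn

NUM_CLASSES = 6  # six gesture classes, as in the dataset description


class UnimodalLSTMClassifier(nn.Module):
    """LSTM sequence classifier for a single modality."""

    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, NUM_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim); classify from the last hidden state
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # unnormalized class scores


def late_fusion_predict(classifiers, inputs):
    """Average per-modality class probabilities and return the fused prediction."""
    probs = [torch.softmax(clf(x), dim=-1) for clf, x in zip(classifiers, inputs)]
    fused = torch.stack(probs).mean(dim=0)
    return fused.argmax(dim=-1)


if __name__ == "__main__":
    # Four hypothetical modalities with assumed feature dimensions.
    feature_dims = [63, 63, 30, 30]
    classifiers = [UnimodalLSTMClassifier(d) for d in feature_dims]
    # One batch of two sequences, 40 time steps each, per modality.
    inputs = [torch.randn(2, 40, d) for d in feature_dims]
    print(late_fusion_predict(classifiers, inputs))
```

In this scheme each unimodal classifier can be trained separately on its own modality; only the fusion rule (here a simple mean of softmax outputs) operates across modalities.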
Pages: 76-97
Number of pages: 22
Related papers
50 results
  • [1] Gesture Recognition on a New Multi-Modal Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021 : 122 - 131
  • [2] On Multi-modal Fusion for Freehand Gesture Recognition
    Schak, Monika
    Gepperth, Alexander
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 862 - 873
  • [3] Multi-modal fusion for robust hand gesture recognition based on heterogeneous networks
    Zou, Yongxiang
    Cheng, Long
    Han, Lijun
    Li, Zhengwei
    [J]. SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2023, 66 (11) : 3219 - 3230
  • [4] Multi-modal Gesture Recognition Challenge 2013: Dataset and Results
    Escalera, Sergio
    Gonzalez, Jordi
    Baro, Xavier
    Reyes, Miguel
    Lopes, Oscar
    Guyon, Isabelle
    Athitsos, Vassilis
    Escalante, Hugo J.
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013 : 445 - 452
  • [5] Convolutional Transformer Fusion Blocks for Multi-Modal Gesture Recognition
    Hampiholi, Basavaraj
    Jarvers, Christian
    Mader, Wolfgang
    Neumann, Heiko
    [J]. IEEE ACCESS, 2023, 11 : 34094 - 34103
  • [6] Analysis of Deep Fusion Strategies for Multi-modal Gesture Recognition
    Roitberg, Alina
    Pollert, Tim
    Haurilet, Monica
    Martin, Manuel
    Stiefelhagen, Rainer
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019 : 198 - 206
  • [7] Multi-modal Learning for Gesture Recognition
    Cao, Congqi
    Zhang, Yifan
    Lu, Hanqing
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015