MULTI-MODAL LEARNING FOR GESTURE RECOGNITION

Cited by: 0
Authors
Cao, Congqi [1 ]
Zhang, Yifan [1 ]
Lu, Hanqing [1 ]
Affiliation
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Keywords
multi-modality; gesture recognition; coupled hidden Markov model; AUDIOVISUAL EMOTION RECOGNITION;
DOI
Not available
CLC number
TP31 [Computer Software];
Discipline codes
081202; 0835
Abstract
With the development of sensing equipment, data from different modalities are available for gesture recognition. In this paper, we propose a novel multi-modal learning framework. A coupled hidden Markov model (CHMM) is employed to discover the correlation and complementary information across different modalities. The framework supports two configurations: multi-modal learning with multi-modal testing, where all the modalities used during learning are still available at test time; and multi-modal learning with single-modal testing, where only one modality is available at test time. Experiments on two real-world gesture recognition data sets demonstrate the effectiveness of the proposed framework, with improvements observed in both the multi-modal and the single-modal testing configurations.
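For orientation, the coupling a CHMM provides can be sketched with the standard two-chain factorization: each modality keeps its own emission model, while the hidden state of each chain at time t is conditioned on both chains' states at time t-1. This is the generic CHMM formulation, not necessarily the exact parameterization used in the paper:

P(O, S) = \prod_{c=1}^{2} \Big[ P(s_1^c)\, P(o_1^c \mid s_1^c) \prod_{t=2}^{T} P(s_t^c \mid s_{t-1}^1, s_{t-1}^2)\, P(o_t^c \mid s_t^c) \Big]

where s_t^c and o_t^c denote the hidden state and the observation of modality c at time t.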
Pages: 6
Related papers (50 in total)
  • [1] Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition
    Fan, Dinghao
    Lu, Hengjie
    Xu, Shugong
    Cao, Shan
    [J]. IEEE SENSORS JOURNAL, 2021, 21 (23) : 27026 - 27036
  • [2] On Multi-modal Fusion for Freehand Gesture Recognition
    Schak, Monika
    Gepperth, Alexander
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 862 - 873
  • [3] Adaptive cross-fusion learning for multi-modal gesture recognition
    Zhou, Benjia
    Wan, Jun
    Liang, Yanyan
    Guo, Guodong
    [J]. VIRTUAL REALITY & INTELLIGENT HARDWARE, 2021, 3 (03) : 235 - 247
  • [4] ModDrop: Adaptive Multi-Modal Gesture Recognition
    Neverova, Natalia
    Wolf, Christian
    Taylor, Graham
    Nebout, Florian
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (08) : 1692 - 1706
  • [5] Fusing Multi-modal Features for Gesture Recognition
    Wu, Jiaxiang
    Cheng, Jian
    Zhao, Chaoyang
    Lu, Hanqing
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 453 - 459
  • [6] Gesture Recognition on a New Multi-Modal Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021, : 122 - 131
  • [7] Gesture Recognition and Multi-modal Fusion on a New Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PATTERN RECOGNITION APPLICATIONS AND METHODS, ICPRAM 2021, ICPRAM 2022, 2023, 13822 : 76 - 97
  • [8] Gesture recognition based on multi-modal feature weight
    Duan, Haojie
    Sun, Ying
    Cheng, Wentao
    Jiang, Du
    Yun, Juntong
    Liu, Ying
    Liu, Yibo
    Zhou, Dalin
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (05)
  • [9] A Unified Framework for Multi-Modal Isolated Gesture Recognition
    Duan, Jiali
    Wan, Jun
    Zhou, Shuai
    Guo, Xiaoyuan
    Li, Stan Z.
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2018, 14 (01)