MULTI-MODAL LEARNING FOR GESTURE RECOGNITION

Cited by: 0
|
Authors
Cao, Congqi [1 ]
Zhang, Yifan [1 ]
Lu, Hanqing [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Keywords
multi-modality; gesture recognition; coupled hidden Markov model; AUDIOVISUAL EMOTION RECOGNITION;
DOI
Not available
CLC number
TP31 [Computer Software];
Subject classification codes
081202 ; 0835 ;
Abstract
With the development of sensing equipment, data from different modalities has become available for gesture recognition. In this paper, we propose a novel multi-modal learning framework. A coupled hidden Markov model (CHMM) is employed to discover the correlation and complementary information across different modalities. The framework supports two configurations: multi-modal learning with multi-modal testing, where all the modalities used during learning remain available during testing; and multi-modal learning with single-modal testing, where only one modality is available during testing. Experiments on two real-world gesture recognition data sets demonstrate the effectiveness of our multi-modal learning framework, with improvements observed in both multi-modal and single-modal testing.
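The coupled HMM named in the abstract can be sketched as follows. This is a minimal toy illustration of a two-chain CHMM forward pass over the joint state space, with random parameters; it is not the authors' implementation, and the chain sizes and variable names are assumptions for the sketch.

```python
import numpy as np

# Toy coupled HMM: two chains (e.g. one per modality), each chain's next
# state depends on BOTH chains' previous states; emissions are per-chain.
rng = np.random.default_rng(0)
Na, Nb, T = 2, 3, 5                    # states per chain, sequence length

def rand_dist(*shape):
    """Random stochastic array, normalized over the last axis."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

pi_a, pi_b = rand_dist(Na), rand_dist(Nb)      # initial state distributions
A_a = rand_dist(Na, Nb, Na)  # P(a_t | a_{t-1}, b_{t-1}): cross-chain coupling
A_b = rand_dist(Na, Nb, Nb)  # P(b_t | a_{t-1}, b_{t-1})
# Per-frame emission likelihoods for each chain (stand-ins for real features)
lik_a, lik_b = rng.random((T, Na)), rng.random((T, Nb))

def forward(lik_a, lik_b):
    """Return log P(observations) under the coupled HMM (scaled forward pass)."""
    alpha = (pi_a[:, None] * pi_b[None, :]) * (lik_a[0][:, None] * lik_b[0][None, :])
    logp = 0.0
    for t in range(1, len(lik_a)):
        c = alpha.sum()
        logp += np.log(c)
        alpha /= c                              # rescale to avoid underflow
        # new joint state (i, j): sum over previous joint states (k, l)
        alpha = np.einsum('kl,kli,klj->ij', alpha, A_a, A_b)
        alpha *= lik_a[t][:, None] * lik_b[t][None, :]
    return logp + np.log(alpha.sum())

print(forward(lik_a, lik_b))
```

For classification, one such model would be trained per gesture class and a test sequence assigned to the class with the highest log-likelihood; the single-modal testing configuration corresponds to marginalizing out the missing chain's emissions.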
Pages: 6
Related papers
50 records in total
  • [31] Multi-modal fusion for robust hand gesture recognition based on heterogeneous networks
    Zou, Yongxiang
    Cheng, Long
    Han, Lijun
    Li, Zhengwei
    [J]. SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2023, 66 (11) : 3219 - 3230
  • [32] Multi-modal Gesture Recognition Using Skeletal Joints and Motion Trail Model
    Liang, Bin
    Zheng, Lihong
    [J]. COMPUTER VISION - ECCV 2014 WORKSHOPS, PT I, 2015, 8925 : 623 - 638
  • [35] Multi-modal gesture recognition with voting-based dynamic time warping
    Kuang, Yiqun
    Cheng, Hong
    Hao, Jiasheng
    Xie, Ruimeng
    Cui, Fang
    [J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2019, 16 (06)
  • [36] Multi-modal Gesture Recognition using Integrated Model of Motion, Audio and Video
    Goutsu, Yusuke
    Kobayashi, Takaki
    Obara, Junya
    Kusajima, Ikuo
    Takeichi, Kazunari
    Takano, Wataru
    Nakamura, Yoshihiko
    [J]. CHINESE JOURNAL OF MECHANICAL ENGINEERING, 2015, 28 (04) - 665
  • [37] Multi-modal user interface combining eye tracking and hand gesture recognition
    Kim, Hansol
    Suh, Kun Ha
    Lee, Eui Chul
    [J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2017, 11 : 241 - 250
  • [38] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    [J]. NEUROCOMPUTING, 2024, 570
  • [39] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    [J]. 2ND IEEE INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER CONTROL (ICACC 2010), VOL. 5, 2010, : 612 - 616
  • [40] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    [J]. 2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, : 720 - 723