The catchment feature model: A device for multimodal fusion and a bridge between signal and sense

Cited by: 10
Author
Quek, F [1]
Affiliation
[1] Virginia Polytech Inst & State Univ, Vis Interfaces & Syst Lab, Ctr Human Comp Interact, Blacksburg, VA 24061 USA
Funding
US National Science Foundation;
Keywords
multimodal interaction; gesture interaction; multimodal communications; motion symmetries; gesture space use;
DOI
10.1155/S1110865704405101
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.
Pages: 1619 - 1636
Page count: 18
Related Papers
38 in total
  • [1] The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense
    Francis Quek
    [J]. EURASIP Journal on Advances in Signal Processing, 2004
  • [2] The catchment feature model for multimodal language analysis
    Quek, F
    [J]. NINTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS I AND II, PROCEEDINGS, 2003, : 540 - 547
  • [3] Mobile Signal Modulation Recognition Based on Multimodal Feature Fusion
    Zhuoran Cai
    Yuqian Li
    Qidi Wu
    [J]. Mobile Networks and Applications, 2022, 27 : 2469 - 2482
  • [4] Mobile Signal Modulation Recognition Based on Multimodal Feature Fusion
    Cai, Zhuoran
    Li, Yuqian
    Wu, Qidi
    [J]. MOBILE NETWORKS & APPLICATIONS, 2022, 27 (06): : 2469 - 2482
  • [5] Sense Adaptive Multimodal Information Fusion: A Proposed Model
    Bokhari, Mohammad Ubaidullah
    Hasan, Faraz
    [J]. PROCEEDINGS OF THE 10TH INDIACOM - 2016 3RD INTERNATIONAL CONFERENCE ON COMPUTING FOR SUSTAINABLE GLOBAL DEVELOPMENT, 2016, : 3976 - 3979
  • [6] Multimodal Feature Fusion Based Hypergraph Learning Model
    Yang, Zhe
    Xu, Liangkui
    Zhao, Lei
    [J]. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [7] Time-Frequency Aliased Signal Identification Based on Multimodal Feature Fusion
    Zhang, Hailong
    Li, Lichun
    Pan, Hongyi
    Li, Weinian
    Tian, Siyao
    [J]. SENSORS, 2024, 24 (08)
  • [8] A novel signal to image transformation and feature level fusion for multimodal emotion recognition
    Yilmaz, Bahar Hatipoglu
    Kose, Cemal
    [J]. BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK, 2021, 66 (04): : 353 - 362
  • [9] Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal
    Han Zhiyan
    Wang Jian
    [J]. INTERNATIONAL SEMINAR ON APPLIED PHYSICS, OPTOELECTRONICS AND PHOTONICS (APOP 2016), 2016, 61
  • [10] MULTIMODAL FEATURE FUSION MODEL FOR RGB-D ACTION RECOGNITION
    Xu Weiyao
    Wu Muqing
    Zhao Min
    Xia Ting
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2021,