Concept Based Hybrid Fusion of Multimodal Event Signals

Cited by: 0
Authors
Wang, Yuhui [1 ]
von der Weth, Christian [2 ]
Zhang, Yehong [3 ]
Low, Kian Hsiang [3 ]
Singh, Vivek K. [4 ]
Kankanhalli, Mohan [3 ]
Affiliations
[1] Natl Univ Singapore, NUS Grad Sch Integrat Sci & Engn, Singapore, Singapore
[2] Natl Univ Singapore, Interact & Digital Media Inst, SeSaMe Ctr, Singapore, Singapore
[3] Natl Univ Singapore, Dept Comp Sci, Singapore, Singapore
[4] Rutgers State Univ, Sch Commun & Informat, New Brunswick, NJ USA
Keywords
multimodal fusion; situation understanding; multisensor data analysis; events; image fusion
DOI
10.1109/ISM.2016.64
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent years have seen a significant increase in the number of sensors and the resulting event-related sensor data, allowing for better monitoring and understanding of real-world events and situations. Event-related data come not only from physical sensors (e.g., CCTV cameras, webcams) but also from social or microblogging platforms (e.g., Twitter). Given the widespread availability of sensors, we observe that sensors of different modalities often independently observe the same events. We argue that fusing multimodal data about an event can enable more accurate detection, localization, and detailed description of events of interest. However, multimodal data often include noisy observations, varying information densities, and heterogeneous representations, which makes fusion a challenging task. In this paper, we propose a hybrid fusion approach that takes into account the spatial and semantic characteristics of sensor signals about events. For this, we first adopt an image-based representation, called Cmage, that expresses the situation of particular visual concepts (e.g., "crowdedness", "people marching") for both physical and social sensor data. Based on this Cmage representation, we model sparse sensor information using a Gaussian process, fuse multimodal event signals with a Bayesian approach, and incorporate spatial relations between the sensor and social observations. We demonstrate the effectiveness of our approach as a proof of concept over real-world data. Our early results show that the proposed approach can reliably reduce sensor-related noise, locate the place of an event, improve event detection reliability, and add semantic context, so that the fused data provide a better picture of the observed events.
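The abstract mentions modeling sparse sensor information with a Gaussian process. The paper itself does not reproduce its implementation here, but the general technique can be sketched as standard GP regression: interpolating a concept-intensity signal (e.g., "crowdedness") over space from a few noisy sensor readings. The kernel choice, length scale, noise level, and the toy sensor locations below are all illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between 1-D location arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_obs, y_obs, x_query, noise=0.1):
    """GP regression: posterior mean and variance at query locations."""
    K = rbf_kernel(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    K_s = rbf_kernel(x_obs, x_query)
    K_ss_diag = np.ones(len(x_query))  # prior variance = 1 on the diagonal
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = K_ss_diag - np.sum(v**2, axis=0)
    return mean, var

# Hypothetical sparse "crowdedness" readings at four camera positions
# along a street, interpolated over a denser spatial grid.
x_obs = np.array([0.0, 1.0, 3.0, 4.0])
y_obs = np.array([0.2, 0.8, 0.9, 0.3])
x_query = np.linspace(0.0, 4.0, 9)
mean, var = gp_posterior(x_obs, y_obs, x_query)
```

The posterior variance is largest in the gap between sensors, which is the property that lets a fusion system weight sparse physical-sensor evidence against social observations by uncertainty.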
Pages: 14-19
Page count: 6