Concept Based Hybrid Fusion of Multimodal Event Signals

Cited: 0
Authors
Wang, Yuhui [1 ]
von der Weth, Christian [2 ]
Zhang, Yehong [3 ]
Low, Kian Hsiang [3 ]
Singh, Vivek K. [4 ]
Kankanhalli, Mohan [3 ]
Affiliations
[1] Natl Univ Singapore, NUS Grad Sch Integrat Sci & Engn, Singapore, Singapore
[2] Natl Univ Singapore, Interact & Digital Media Inst, SeSaMe Ctr, Singapore, Singapore
[3] Natl Univ Singapore, Dept Comp Sci, Singapore, Singapore
[4] Rutgers State Univ, Sch Commun & Informat, New Brunswick, NJ USA
Keywords
multimodal fusion; situation understanding; multisensor data analysis; events; IMAGE FUSION
DOI
10.1109/ISM.2016.64
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent years have seen a significant increase in the number of sensors and the resulting event-related sensor data, allowing for better monitoring and understanding of real-world events and situations. Event-related data come not only from physical sensors (e.g., CCTV cameras, webcams) but also from social or microblogging platforms (e.g., Twitter). Given the widespread availability of sensors, we observe that sensors of different modalities often independently observe the same events. We argue that fusing multimodal data about an event enables more accurate detection, localization, and detailed description of events of interest. However, multimodal data often involve noisy observations, varying information densities, and heterogeneous representations, which make fusion a challenging task. In this paper, we propose a hybrid fusion approach that takes into account the spatial and semantic characteristics of sensor signals about events. To this end, we first adopt an image-based representation, called Cmage, that expresses the situation of particular visual concepts (e.g., "crowdedness", "people marching") for both physical and social sensor data. Based on this Cmage representation, we model sparse sensor information using a Gaussian process, fuse multimodal event signals with a Bayesian approach, and incorporate spatial relations between the sensor and social observations. We demonstrate the effectiveness of our approach as a proof of concept over real-world data. Our early results show that the proposed approach can reliably reduce sensor-related noise, localize events, improve event detection reliability, and add semantic context, so that the fused data provide a better picture of the observed events.
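The core idea in the abstract — modeling sparse, heterogeneous sensor observations with a Gaussian process and fusing modalities in a Bayesian way — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes an RBF covariance, a unit-interval map, and per-observation noise variances (low for physical sensors, high for social posts) as stand-ins for whatever the paper actually uses. All variable names (`cam_locs`, `tweet_obs`, etc.) are hypothetical.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 2-D locations."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_fuse(locs, obs, noise_var, query, length_scale=0.3):
    """GP posterior mean/variance at query points, fusing observations
    from multiple modalities via per-observation noise variances."""
    K = rbf_kernel(locs, locs, length_scale) + np.diag(noise_var)
    Ks = rbf_kernel(query, locs, length_scale)      # (Q, N) cross-covariance
    mean = Ks @ np.linalg.solve(K, obs)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Physical sensors: denser, low-noise "crowdedness" scores;
# social posts: sparse, high-noise observations of the same concept.
cam_locs = np.array([[0.1, 0.2], [0.4, 0.4], [0.8, 0.7]])
cam_obs = np.array([0.9, 0.8, 0.1])
tweet_locs = np.array([[0.2, 0.3]])
tweet_obs = np.array([1.0])

locs = np.vstack([cam_locs, tweet_locs])
obs = np.concatenate([cam_obs, tweet_obs])
noise_var = np.concatenate([np.full(3, 0.01),   # trust cameras more
                            np.full(1, 0.25)])  # trust tweets less

grid = np.array([[x, y] for x in np.linspace(0, 1, 5)
                        for y in np.linspace(0, 1, 5)])
mean, var = gp_fuse(locs, obs, noise_var, grid)
```

The posterior mean interpolates the fused signal over the map, and the posterior variance flags regions with little sensor coverage — the kind of spatial smoothing and uncertainty the abstract attributes to the GP step.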
Pages: 14-19
Page count: 6