Multi-modality Sensor Data Classification with Selective Attention

Cited by: 0
Authors
Zhang, Xiang [1 ]
Yao, Lina [1 ]
Huang, Chaoran [1 ]
Wang, Sen [2 ]
Tan, Mingkui [3 ]
Long, Guodong [4 ]
Wang, Can [2 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW, Australia
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld, Australia
[3] South China Univ Technol, Sch Software Engn, Guangzhou, Peoples R China
[4] Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Sydney, NSW, Australia
Keywords
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications, from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, wearable sensor data are less informative than conventional data such as text or images. In this paper, to improve the adaptability of such classification methods across application domains, we cast the classification task as a game and apply a deep reinforcement learning scheme to handle complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism captures extra information from the signal and thereby significantly improves the discriminative power of the classifier. We carry out experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.
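The selective attention idea described in the abstract — reweighting feature dimensions so the classifier focuses on the crucial sensor channels — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the fixed attention logits, and the softmax-based weighting are illustrative assumptions about how such a mechanism is commonly realized.

```python
import numpy as np

def selective_attention(x, w):
    """Reweight feature dimensions with a softmax attention mask.

    x : (n_windows, n_dims) matrix of windowed sensor features
    w : (n_dims,) attention logits (learned in practice; fixed here)
    """
    a = np.exp(w - w.max())
    a = a / a.sum()        # softmax -> attention weights summing to 1
    return x * a           # emphasize crucial dimensions, damp the rest

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6))                    # 4 windows, 6 sensor channels
w = np.array([2.0, 0.0, 0.0, 0.0, 0.0, 2.0])   # hypothetically favor channels 0 and 5
y = selective_attention(x, w)                  # same shape, reweighted features
```

In the paper's setting the logits would be produced by the reinforcement learning agent rather than fixed by hand; the reweighted features then feed the downstream classifier.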
Pages: 3111-3117 (7 pages)