Multi-modality Sensor Data Classification with Selective Attention

Cited by: 0
Authors:
Zhang, Xiang [1 ]
Yao, Lina [1 ]
Huang, Chaoran [1 ]
Wang, Sen [2 ]
Tan, Mingkui [3 ]
Long, Guodong [4 ]
Wang, Can [2 ]
Affiliations:
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW, Australia
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld, Australia
[3] South China Univ Technol, Sch Software Engn, Guangzhou, Peoples R China
[4] Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Sydney, NSW, Australia
Keywords: (none listed)
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications, from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, wearable sensor data are less informative than conventional data such as text or images. In this paper, to improve the adaptability of such classification methods across application domains, we recast the classification task as a game and apply a deep reinforcement learning scheme to handle complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps capture extra information from the signal and thus significantly improves the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared with several state-of-the-art baselines.
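
The abstract describes two ingredients: recasting classification as a reinforcement-learning game (the agent's action is the predicted label, rewarded when correct) and a selective attention mechanism that weights the crucial dimensions of the sensor signal. Below is a minimal, hypothetical PyTorch sketch of how these two pieces could fit together; all class and function names are illustrative assumptions, and the paper's actual architecture and training scheme may differ.

# A minimal, hypothetical sketch (PyTorch): dimension-wise selective attention
# feeding a classifier trained with a REINFORCE-style reward, where the
# predicted class is the "action" and the reward is +1 if correct, -1 otherwise.
# All names here are illustrative; the paper's actual design may differ.
import torch
import torch.nn as nn

class SelectiveAttentionClassifier(nn.Module):
    def __init__(self, n_dims, n_classes, hidden=64):
        super().__init__()
        self.attn = nn.Linear(n_dims, n_dims)   # scores over sensor dimensions
        self.mlp = nn.Sequential(
            nn.Linear(n_dims, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        # Softmax over dimensions yields weights emphasising crucial channels.
        weights = torch.softmax(self.attn(x), dim=-1)
        return self.mlp(weights * x)             # class logits ("action" scores)

def reinforce_step(model, optimizer, x, y):
    # One policy-gradient update: sample a class, reward it, ascend log-prob.
    logits = model(x)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = torch.where(action == y, 1.0, -1.0)
    loss = -(dist.log_prob(action) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()

# Toy usage: 128 windows of 9-dimensional sensor readings, 4 activity classes.
model = SelectiveAttentionClassifier(n_dims=9, n_classes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 9), torch.randint(0, 4, (128,))
for _ in range(5):
    avg_reward = reinforce_step(model, opt, x, y)

Weighting the input with a learned softmax before classification lets the gradient concentrate on the few channels that actually discriminate between classes, which matches the intuition the abstract attributes to the selective attention mechanism.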
Pages: 3111-3117
Page count: 7