A Deep Dilated Convolutional Self-attention Model for Multimodal Human Activity Recognition

Cited by: 1
Authors
Wang, Shengzhi [1 ]
Xiao, Shuo [1 ]
Wang, Yu [1 ]
Jiang, Haifeng [1 ]
Zhang, Guopeng [1 ]
Affiliation
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/ICPR56361.2022.9956723
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Wearable-sensor-based Human Activity Recognition (HAR) has long been a hot topic in ubiquitous computing and has benefited greatly from the success of deep learning algorithms. The critical difficulty in multimodal sensing environments is how to represent spatial-temporal dependencies while concurrently extracting highly discriminative features. In this work, we propose a self-attention-based deep dilated convolution network. Our method uses two channels, a temporal channel and a spatial channel, to extract readings-over-time and time-over-readings features from sensor signals. The self-attention mechanism directly captures long-range temporal dependencies in the sensor signals. To extract local features, we use deep dilated convolution, which expands the receptive field while avoiding the information loss caused by pooling and upsampling. Extensive experiments on a self-built dataset and two public benchmark datasets (PAMAP2, OPPORTUNITY) show that our proposed model is more competitive than state-of-the-art methods on HAR tasks.
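The abstract's core idea (stacked dilated 1-D convolutions over time plus self-attention, with no pooling or upsampling) can be illustrated with the minimal PyTorch sketch below. The layer counts, channel sizes, class name DilatedConvSelfAttentionHAR, and the simplification of the paper's dual temporal/spatial channels into a single temporal path are assumptions for illustration, not the authors' published configuration.

# Minimal sketch, assuming a single temporal channel: dilated 1-D convolutions
# expand the receptive field without pooling, and multi-head self-attention
# captures long-range dependencies over a window of multimodal sensor readings.
import torch
import torch.nn as nn


class DilatedConvSelfAttentionHAR(nn.Module):
    def __init__(self, n_sensor_channels: int, n_classes: int,
                 hidden: int = 64, n_heads: int = 4):
        super().__init__()
        # Dilation grows exponentially, so the receptive field widens
        # while the sequence length is preserved (padding = dilation).
        self.dilated_convs = nn.Sequential(
            nn.Conv1d(n_sensor_channels, hidden, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=4, padding=4),
            nn.ReLU(),
        )
        # Self-attention over time steps models long-range dependencies directly.
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=n_heads,
                                          batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensor_channels) -- a sliding window of sensor readings.
        h = self.dilated_convs(x.transpose(1, 2))   # (batch, hidden, time)
        h = h.transpose(1, 2)                       # (batch, time, hidden)
        h, _ = self.attn(h, h, h)                   # self-attention over time
        return self.classifier(h.mean(dim=1))       # pool over time, then classify


# Example: a PAMAP2-style window of 100 time steps over 52 sensor channels.
model = DilatedConvSelfAttentionHAR(n_sensor_channels=52, n_classes=12)
logits = model(torch.randn(8, 100, 52))             # shape (8, 12)

The paper's second (spatial, time-over-readings) channel would apply the analogous operations across the sensor-channel axis and fuse the two feature streams before classification; that fusion step is omitted here for brevity.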
Pages: 791-797
Number of pages: 7