Multimodal cooperative self-attention network for action recognition

Cited by: 2
Authors
Zhong, Zhuokun [1 ]
Hou, Zhenjie [1 ]
Liang, Jiuzhen [1 ]
Lin, En [2 ]
Shi, Haiyong [1 ]
Affiliations
[1] Changzhou Univ, Sch Comp & Artificial Intelligence, Changzhou 213000, Peoples R China
[2] Goldcard Smart Grp Co Ltd, Hangzhou, Peoples R China
Keywords
computer vision; image fusion
DOI
10.1049/ipr2.12754
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multimodal human behaviour recognition is a research hotspot in computer vision. To make full use of both skeleton and depth data, this paper constructs a new multimodal recognition scheme built around the self-attention mechanism. The system comprises a transformer-based skeleton self-attention subnetwork and a CNN-based depth self-attention subnetwork. In the skeleton subnetwork, the paper proposes a motion synergy space feature that integrates the information of each joint according to the entirety and synergy of human motion, and puts forward a quantitative measure of each joint's contribution to the motion. The outputs of the skeleton and depth self-attention subnetworks are fused, and the approach is verified on the NTU RGB+D and UTD-MHAD datasets. The method achieves a 90% recognition rate on UTD-MHAD, and on NTU RGB+D it reaches 90.5% under the cross-subject (CS) protocol and 94.7% under the cross-view (CV) protocol. Experimental results show that the proposed network achieves a high recognition rate and outperforms most current methods.
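The abstract describes a two-stream design: a transformer-based self-attention branch over skeleton joints and a CNN branch over depth maps, with the two branches' class scores fused for the final prediction. The sketch below illustrates that general pattern in PyTorch; all layer sizes, the class count, the module names (SkeletonBranch, DepthBranch, TwoStreamFusion), and the equal-weight score fusion are assumptions for illustration, not the authors' published architecture.

```python
# Minimal two-stream sketch: transformer over skeleton sequences, CNN over
# depth maps, late fusion of class scores. Hypothetical sizes throughout.
import torch
import torch.nn as nn

class SkeletonBranch(nn.Module):
    """Self-attention over per-frame skeleton features (assumed 25 joints x 3 coords)."""
    def __init__(self, num_joints=25, coords=3, dim=128, num_classes=60):
        super().__init__()
        self.embed = nn.Linear(num_joints * coords, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                  # x: (batch, frames, joints * coords)
        h = self.encoder(self.embed(x))    # self-attention across frames
        return self.head(h.mean(dim=1))    # temporal average pool, then classify

class DepthBranch(nn.Module):
    """Small CNN over single-channel depth maps (placeholder architecture)."""
    def __init__(self, num_classes=60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, d):                  # d: (batch, 1, H, W)
        return self.head(self.features(d).flatten(1))

class TwoStreamFusion(nn.Module):
    """Late score fusion; the 0.5/0.5 weighting is an assumption, not from the paper."""
    def __init__(self, num_classes=60, w=0.5):
        super().__init__()
        self.skel = SkeletonBranch(num_classes=num_classes)
        self.depth = DepthBranch(num_classes=num_classes)
        self.w = w

    def forward(self, skel_seq, depth_map):
        return self.w * self.skel(skel_seq) + (1 - self.w) * self.depth(depth_map)

# Example: 2 clips, 16 frames of 25x3 skeleton coords, and 64x64 depth maps.
model = TwoStreamFusion(num_classes=60)
scores = model(torch.randn(2, 16, 25 * 3), torch.randn(2, 1, 64, 64))
print(scores.shape)  # torch.Size([2, 60])
```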
Pages: 1775-1783
Number of pages: 9
Related papers (50 in total)
  • [41] Cao, Haiwen; Wu, Chunlei; Lu, Jing; Wu, Jie; Wang, Leiquan. Spatial-temporal injection network: exploiting auxiliary losses for action recognition with apparent difference and self-attention. Signal, Image and Video Processing, 2023, 17(4): 1173-1180.
  • [42] Alfasly, Saghir; Chui, Charles K.; Jiang, Qingtang; Lu, Jian; Xu, Chen. An Effective Video Transformer With Synchronized Spatiotemporal and Spatial Self-Attention for Action Recognition. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(2): 2496-2509.
  • [43] Zhang, Qiang; Sun, Banyong; Cheng, Yaxiong; Li, Xijie. Residual Self-Calibration and Self-Attention Aggregation Network for Crop Disease Recognition. International Journal of Environmental Research and Public Health, 2021, 18(16).
  • [44] Punnachaiya, Kamol; Vateekul, Peerapon; Wichadakul, Duangdao. Multimodal Modules and Self-Attention for Graph Neural Network Molecular Properties Prediction Model. 2023 11th International Conference on Bioinformatics and Computational Biology (ICBCB), 2023: 141-146.
  • [45] Zhao, Y.-F.; Jin, F.-S.; Li, R.-H.; Qin, H.-C.; Cui, P.; Wang, G.-R. Self-attention Hypergraph Pooling Network. Ruan Jian Xue Bao/Journal of Software, 2023, 34(10).
  • [46] Zhu, Hu; Wang, Ze; Shi, Yu; Hua, Yingying; Xu, Guoxia; Deng, Lizhen. Multimodal Fusion Method Based on Self-Attention Mechanism. Wireless Communications & Mobile Computing, 2020, 2020.
  • [47] Paiva, Pedro V. V.; Ramos, Josue J. G.; Gavrilova, Marina; Carvalho, Marco A. G. Attention to Emotions: Body Emotion Recognition In-the-Wild Using Self-attention Transformer Network. Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), 2024, 2103: 206-228.
  • [48] Jiang, Minghua; Zhao, Yaxin; Yu, Feng; Zhou, Changlong; Peng, Tao. A self-attention network for smoke detection. Fire Safety Journal, 2022, 129.
  • [49] Mattan, Bradley D.; Quinn, Kimberly A.; Rotshtein, Pia. Relevance, valence, and the self-attention network. Cognitive Neuroscience, 2016, 7(1-4): 27-28.
  • [50] Li, Na; Wu, Yangyang; Liu, Ying; Li, Daxiang; Gao, Jiale. Saliency guided self-attention network for pedestrian attribute recognition in surveillance scenarios. The Journal of China Universities of Posts and Telecommunications, 2022, 29(5): 21-29.