Spatio-temporal segments attention for skeleton-based action recognition

Times Cited: 13
Authors
Qiu, Helei [1 ]
Hou, Biao [1 ]
Ren, Bo [1 ]
Zhang, Xiaohua [1 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Action recognition; Skeleton; Self-attention; Spatio-temporal joints; Feature aggregation; Networks
DOI
10.1016/j.neucom.2022.10.084
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Capturing the dependencies between joints is critical in skeleton-based action recognition. However, existing methods cannot effectively capture the correlations of different joints between frames, which are important because different body parts (such as the arms and legs in "long jump") move together across adjacent frames. To address this issue, a novel spatio-temporal segments attention method is proposed. The skeleton sequence is divided into several segments, and the consecutive frames contained in each segment are encoded. An intra-segment self-attention module is then proposed to capture the relationships among different joints across consecutive frames. In addition, an inter-segment action attention module is introduced to capture the relationships between segments and enhance the ability to distinguish similar actions. Compared with state-of-the-art methods, our method achieves better performance on two large-scale datasets. (c) 2022 Elsevier B.V. All rights reserved.
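The two attention stages described in the abstract can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation: the module names (IntraSegmentAttention, InterSegmentAttention), tensor shapes, head counts, and the mean pooling used to obtain one descriptor per segment are all illustrative assumptions. It only shows how joints of consecutive frames inside a segment can attend to one another before segment-level descriptors attend across segments.

import torch
import torch.nn as nn

class IntraSegmentAttention(nn.Module):
    # Self-attention over all joints of the consecutive frames inside one segment,
    # so that different joints in different frames can be related directly.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, frames_per_segment * joints, dim)
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

class InterSegmentAttention(nn.Module):
    # Attention across segment-level descriptors, relating segments to each other.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, s):
        # s: (batch, num_segments, dim), one pooled descriptor per segment
        out, _ = self.attn(s, s, s)
        return self.norm(s + out)

if __name__ == "__main__":
    batch, segments, frames, joints, dim = 2, 4, 5, 25, 64   # illustrative sizes
    x = torch.randn(batch, segments, frames, joints, dim)    # encoded joint features

    intra = IntraSegmentAttention(dim)
    inter = InterSegmentAttention(dim)

    # Intra-segment: flatten the joints of the consecutive frames in each segment
    # into one token sequence and let them attend to one another.
    tokens = x.reshape(batch * segments, frames * joints, dim)
    tokens = intra(tokens)

    # Inter-segment: pool each segment to a single descriptor (mean pooling is an
    # assumption here), then relate the segments to one another.
    seg_desc = tokens.mean(dim=1).reshape(batch, segments, dim)
    seg_desc = inter(seg_desc)
    print(seg_desc.shape)  # torch.Size([2, 4, 64])

In the paper these segment features would feed a classification head; the sketch stops at the attended segment descriptors.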
Pages: 30-38
Number of Pages: 9
Related Papers
50 records in total
  • [1] Leveraging Spatio-Temporal Dependency for Skeleton-Based Action Recognition
    Lee, Jungho
    Lee, Minhyeok
    Cho, Suhwan
    Woo, Sungmin
    Jang, Sungjun
    Lee, Sangyoun
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 10221 - 10230
  • [2] Spatio-temporal stacking model for skeleton-based action recognition
    Zhong, Yufeng
    Yan, Qiuyan
    [J]. APPLIED INTELLIGENCE, 2022, 52 (11) : 12116 - 12130
  • [3] Spatio-Temporal Graph Routing for Skeleton-Based Action Recognition
    Li, Bin
    Li, Xi
    Zhang, Zhongfei
    Wu, Fei
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 8561 - 8568
  • [4] Spatio-Temporal Difference Descriptor for Skeleton-Based Action Recognition
    Ding, Chongyang
    Liu, Kai
    Korhonen, Jari
    Belyaev, Evgeny
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 1227 - 1235
  • [5] Spatio-temporal hard attention learning for skeleton-based activity recognition
    Nikpour, Bahareh
    Armanfard, Narges
    [J]. PATTERN RECOGNITION, 2023, 139
  • [6] Transforming spatio-temporal self-attention using action embedding for skeleton-based action recognition
    Ahmad, Tasweer
    Rizvi, Syed Tahir Hussain
    Kanwal, Neel
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [7] On the spatial attention in spatio-temporal graph convolutional networks for skeleton-based human action recognition
    Heidari, Negar
    Iosifidis, Alexandros
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [8] Decoupled spatio-temporal grouping transformer for skeleton-based action recognition
    Sun, Shengkun
    Jia, Zihao
    Zhu, Yisheng
    Liu, Guangcan
    Yu, Zhengtao
    [J]. VISUAL COMPUTER, 2024, 40 (08) : 5733 - 5745
  • [9] Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition
    Ke, Lipeng
    Peng, Kuan-Chuan
    Lyu, Siwei
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 1131 - 1139