Direction-guided two-stream convolutional neural networks for skeleton-based action recognition

Cited by: 0
Authors
Benyue Su
Peng Zhang
Manzhen Sun
Min Sheng
Affiliations
[1] Anqing Normal University, Key Laboratory of Intelligent Perception and Computing of Anhui Province
[2] Anqing Normal University, School of Computer and Information
[3] Tongling University, School of Mathematics and Computer
[4] Anqing Normal University, School of Mathematics and Physics
Source
Soft Computing | 2023, Volume 27
Keywords
Action recognition; Skeleton data; Direction; Edge-level information; Motion information; Feature fusion;
DOI
Not available
Abstract
In skeleton-based action recognition, treating skeleton data as pseudo-images for convolutional neural networks (CNNs) has proven effective. However, most existing CNN-based approaches model information at the joint level and ignore the size and direction of the skeleton edges, which play an important role in action recognition; such approaches may therefore be suboptimal. In addition, few existing approaches exploit the directionality of human motion to describe how an action varies over time, although doing so is more natural and reasonable for modeling action sequences. In this work, we propose a novel direction-guided two-stream convolutional neural network for skeleton-based action recognition. The first stream focuses on our defined edge-level information (including edge and edge_motion information) with directionality in the skeleton data to explore the spatiotemporal features of an action. In the second stream, since motion is directional, we define different skeleton edge directions and extract the corresponding motion information (including translation and rotation information) along each direction to better exploit the motion features of an action. We further propose a description of human motion expressed as a combination of translation and rotation, and explore how the two are integrated. We conducted extensive experiments on two challenging datasets, NTU-RGB+D 60 and NTU-RGB+D 120, to verify the superiority of the proposed method over state-of-the-art methods. The experimental results demonstrate that the proposed direction-guided edge-level information and motion information complement each other for better action recognition.
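A minimal sketch of the edge-level quantities named in the abstract, assuming a skeleton sequence of 3D joint coordinates and a fixed parent-child bone list; the array shapes, the toy bone list, and the function name `edge_features` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical bone list: (child, parent) joint indices for a toy 5-joint skeleton.
# The NTU-RGB+D skeleton has 25 joints; this short list is only for illustration.
BONES = [(1, 0), (2, 1), (3, 2), (4, 3)]

def edge_features(joints):
    """joints: (T, J, 3) array of 3D joint positions over T frames.

    Returns
      edges:       (T, B, 3) directed edge vectors (child minus parent), carrying
                   both the length and the direction of each bone.
      edge_motion: (T-1, B, 3) frame-to-frame change of each edge vector, a simple
                   proxy for the temporal motion of the edges.
    """
    child = np.array([c for c, _ in BONES])
    parent = np.array([p for _, p in BONES])
    edges = joints[:, child, :] - joints[:, parent, :]   # directed bone vectors
    edge_motion = edges[1:] - edges[:-1]                  # temporal difference
    return edges, edge_motion

# Example: a random 10-frame, 5-joint sequence arranged as a pseudo-image
# (frames x bones x xyz channels) that a 2D CNN stream could consume.
seq = np.random.rand(10, 5, 3).astype(np.float32)
edges, edge_motion = edge_features(seq)
print(edges.shape, edge_motion.shape)   # (10, 4, 3) (9, 4, 3)
```

Under these assumptions, each stream of the described network would take one such pseudo-image (edge-level or direction-guided motion features) as input, with the two streams fused at the feature level.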
Pages: 11833 - 11842
Page count: 9
Related Papers
50 items in total
  • [41] Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition
    Zhang, Pengfei
    Lan, Cuiling
    Zeng, Wenjun
    Xing, Junliang
    Xue, Jianru
    Zheng, Nanning
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1109 - 1118
  • [42] Transferable two-stream convolutional neural network for human action recognition
    Xiong, Qianqian
    Zhang, Jianjing
    Wang, Peng
    Liu, Dongdong
    Gao, Robert X.
    JOURNAL OF MANUFACTURING SYSTEMS, 2020, 56 : 605 - 614
  • [43] Two-Stream Adaptive Attention Graph Convolutional Networks for Action Recognition
    Du Q.
    Xiang Z.
    Tian L.
    Yu L.
Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2022, 50 (12): 20 - 29
  • [44] Skeleton-Based Action Recognition with Directed Graph Neural Networks
    Shi, Lei
    Zhang, Yifan
    Cheng, Jian
    Lu, Hanqing
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 7904 - 7913
  • [45] Graph Convolutional Networks Skeleton-based Action Recognition for Continuous Data Stream: A Sliding Window Approach
    Delamare, Mickael
    Laville, Cyril
    Cabani, Adnane
    Chafouk, Houcine
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 5: VISAPP, 2021, : 427 - 435
  • [46] DeGCN: Deformable Graph Convolutional Networks for Skeleton-Based Action Recognition
    Myung, Woomin
    Su, Nan
    Xue, Jing-Hao
    Wang, Guijin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 2477 - 2490
  • [47] Skeleton-based action recognition using spatio-temporal features with convolutional neural networks
    Rostami, Zahra
    Afrasiabi, Mahlagha
    Khotanlou, Hassan
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED ENGINEERING AND INNOVATION (KBEI), 2017, : 583 - 587
  • [48] Temporal segment graph convolutional networks for skeleton-based action recognition
    Ding, Chongyang
    Wen, Shan
    Ding, Wenwen
    Liu, Kai
    Belyaev, Evgeny
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 110
  • [49] Information Enhanced Graph Convolutional Networks for Skeleton-based Action Recognition
    Sun, Dengdi
    Zeng, Fanchen
    Luo, Bin
    Tang, Jin
    Ding, Zhuanlian
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [50] Hierarchically Decomposed Graph Convolutional Networks for Skeleton-Based Action Recognition
    Lee, Jungho
    Lee, Minhyeok
    Lee, Dogyoon
    Lee, Sangyoun
    Proceedings of the IEEE International Conference on Computer Vision, 2023, : 10410 - 10419