Multi-source Learning for Skeleton-based Action Recognition Using Deep LSTM Networks

Cited: 0
Authors
Cui, Ran [1 ]
Zhu, Aichun [2 ]
Zhang, Sai [1 ]
Hua, Gang [1 ]
Affiliations
[1] China Univ Min & Technol, Xuzhou, Jiangsu, Peoples R China
[2] Nanjing Tech Univ, Sch Comp Sci & Technol, Nanjing, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine Learning; Human Action Recognition; Skeleton; Long Short-Term Memory;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Skeleton-based action recognition has attracted wide attention because the skeletal information of the human body expresses action features simply and clearly and is not affected by the physical appearance of the body. In this paper, action recognition is therefore based on skeletal information extracted from RGB-D video. Since the skeleton coordinates we study are two-dimensional, our method can also be applied directly to RGB video. Recently proposed deep-network methods focus only on the temporal dynamics of an action and ignore its spatial configuration. In this paper, a multi-source model is proposed that fuses a temporal model and a spatial model. The temporal model is divided into three branches, which perceive global-level, local-level, and detail-level information, respectively. The spatial model perceives the relative positions of the skeleton joints. Fusing the two models improves recognition accuracy. The proposed method is compared with state-of-the-art methods on a large-scale dataset, and the experimental results demonstrate its effectiveness.
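To make the described architecture concrete, below is a minimal sketch in PyTorch of a three-branch temporal LSTM fused with a spatial model of relative joint positions. Everything in it is an illustrative assumption rather than the authors' implementation: the global/local/detail branches are approximated by temporal average pooling at different window sizes, the spatial feature is a simple time-averaged matrix of pairwise joint coordinate differences, and all module names, layer sizes, the joint count, and the class count are hypothetical.

```python
# Hedged sketch of the multi-source idea from the abstract (PyTorch assumed).
# Branch granularities, feature choices, and all sizes are assumptions.
import torch
import torch.nn as nn

class MultiSourceLSTM(nn.Module):
    def __init__(self, num_joints=25, coord_dim=2, hidden=128, num_classes=60):
        super().__init__()
        in_dim = num_joints * coord_dim
        # Temporal branches: same skeleton sequence, viewed at three time
        # scales. "Global" and "local" views are formed here by average
        # pooling along time (one possible reading of the three levels).
        self.global_pool = nn.AvgPool1d(kernel_size=8, stride=8)
        self.local_pool = nn.AvgPool1d(kernel_size=3, stride=3)
        self.global_lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.local_lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.detail_lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        # Spatial model: encodes the relative positions of skeleton joints.
        self.spatial_fc = nn.Sequential(
            nn.Linear(num_joints * num_joints * coord_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(4 * hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, joints, coords); coords are 2-D per the abstract.
        b, t, j, c = x.shape
        seq = x.reshape(b, t, j * c)
        # Coarser "global" and "local" temporal views via pooling over time.
        g = self.global_pool(seq.transpose(1, 2)).transpose(1, 2)
        l = self.local_pool(seq.transpose(1, 2)).transpose(1, 2)
        _, (hg, _) = self.global_lstm(g)
        _, (hl, _) = self.local_lstm(l)
        _, (hd, _) = self.detail_lstm(seq)   # full-rate "detail" branch
        # Relative joint positions: pairwise coordinate differences,
        # averaged over time, as one simple spatial-configuration feature.
        rel = (x.unsqueeze(3) - x.unsqueeze(2)).mean(dim=1)  # (b, j, j, c)
        hs = self.spatial_fc(rel.reshape(b, -1))
        # Fuse temporal and spatial features and classify.
        fused = torch.cat([hg[-1], hl[-1], hd[-1], hs], dim=1)
        return self.classifier(fused)

# Example: a batch of 4 clips, 48 frames, 25 joints, 2-D coordinates.
logits = MultiSourceLSTM()(torch.randn(4, 48, 25, 2))
print(logits.shape)  # torch.Size([4, 60])
```

Concatenating the four feature vectors before a single classifier is one simple fusion choice; the paper's actual fusion scheme may differ.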
Pages: 547-552
Page count: 6