LEARNING EXPLICIT SHAPE AND MOTION EVOLUTION MAPS FOR SKELETON-BASED HUMAN ACTION RECOGNITION

Times Cited: 0
Authors
Liu, Hong [1 ]
Tu, Juanhui [1 ]
Liu, Mengyuan [2 ]
Ding, Runwei [1 ]
Affiliations
[1] Peking Univ, Key Lab Machine Percept, Shenzhen Grad Sch, Beijing, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Human Action Recognition; Skeleton Sequences; Long Short-Term Memory; Depth Sensor;
DOI
Not available
Chinese Library Classification
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Human action recognition based on skeleton sequences has wide applications in human-computer interaction and intelligent surveillance. Although previous methods have successfully applied Long Short-Term Memory (LSTM) networks to model the shape evolution of human actions, efficiently recognizing actions, especially distinguishing similar actions, remains difficult because sequential data lack motion details. To solve this problem, this paper presents an improved LSTM-based network that jointly learns explicit long-term shape evolution maps (SEM) and motion evolution maps (MEM). First, human actions are represented as compact SEM and MEM, which mutually compensate for each other. Second, these maps are jointly learned by deep LSTM networks to explore high-level temporal dependencies. Then, a weighted aggregate layer (WAL) is designed to aggregate the outputs of the LSTM networks across different temporal stages. Finally, deep shape and motion features are combined by decision-level fusion. Experimental results on the currently largest NTU RGB+D dataset and the public SmartHome dataset verify that our method significantly outperforms the state-of-the-art.
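The pipeline the abstract describes (compact shape/motion evolution maps from a skeleton sequence, per-stage scores aggregated by a weighted aggregate layer) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the choice of joint 0 as the reference joint, and the softmax weighting in the WAL are all assumptions for the sake of a runnable example.

```python
import math

def shape_evolution_map(frames):
    """SEM sketch (assumption): per-frame joint coordinates expressed
    relative to a reference joint (joint 0), capturing evolving body shape."""
    sem = []
    for frame in frames:
        rx, ry = frame[0]
        sem.append([(x - rx, y - ry) for (x, y) in frame])
    return sem

def motion_evolution_map(frames):
    """MEM sketch (assumption): frame-to-frame joint displacements,
    capturing the motion details that a shape map alone misses."""
    mem = []
    for prev, curr in zip(frames, frames[1:]):
        mem.append([(cx - px, cy - py)
                    for (px, py), (cx, cy) in zip(prev, curr)])
    return mem

def weighted_aggregate(stage_scores, weights):
    """WAL sketch (assumption): softmax-normalize learnable stage weights,
    then take a weighted sum of each temporal stage's class scores."""
    exp = [math.exp(w) for w in weights]
    z = sum(exp)
    alphas = [e / z for e in exp]
    n_classes = len(stage_scores[0])
    return [sum(a * s[c] for a, s in zip(alphas, stage_scores))
            for c in range(n_classes)]

# Toy sequence: 3 frames, 2 joints with (x, y) coordinates.
frames = [[(0.0, 0.0), (1.0, 1.0)],
          [(0.0, 0.1), (1.0, 1.2)],
          [(0.0, 0.2), (1.0, 1.4)]]
sem = shape_evolution_map(frames)    # one shape map per frame
mem = motion_evolution_map(frames)   # one displacement map per frame pair
# Two temporal stages, two classes, equal (pre-softmax) stage weights.
fused = weighted_aggregate([[0.2, 0.8], [0.4, 0.6]], [0.0, 0.0])
```

In the actual method, the SEM and MEM streams are each fed to deep LSTM networks and the two streams' final scores are combined by decision-level fusion; the toy `weighted_aggregate` call only illustrates the cross-stage aggregation step.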
Pages: 1333-1337
Page count: 5