Fast 3D-graph convolutional networks for skeleton-based action recognition

Citations: 1
Authors
Zhang, Guohao [1 ]
Wen, Shuhuan [1 ]
Li, Jiaqi [1 ]
Che, Haijun [1 ]
Affiliation
[1] Yanshan Univ, Dept Key Lab Ind Comp Control Engn Hebei Prov, Qinhuangdao 066004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Action recognition; Human skeleton; Graph convolutional; Knowledge distillation;
DOI
10.1016/j.asoc.2023.110575
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research on skeleton-based human action recognition has received much attention, but most work focuses on improving a model's generalization ability while ignoring significant efficiency issues. This leads to heavy models with poor scalability and cost-effectiveness in practical use. In this paper, we investigate the under-studied but practically critical problem of recognition-model efficiency. To this end, we present a new Fast Recognition Distillation (FRD) model-learning strategy. Specifically, FRD trains a lightweight recognition neural network that can be executed quickly at low computational cost. This is achieved by effectively transferring the class-probability information of the teacher network to the lightweight network. We refer to the teacher network's probability information as the soft target, from which FRD can learn additional latent information, and we also use a dedicated loss function for the soft target. Through the FRD network, we minimize the network structure while essentially maintaining recognition accuracy. Extensive experiments on two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that our FRD model is more lightweight and refined than others, and therefore efficient. (c) 2023 Elsevier B.V. All rights reserved.
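The distillation idea described in the abstract (training a lightweight student against the teacher's softened class probabilities alongside the ground-truth labels) can be sketched roughly as follows. This is a minimal illustration of generic soft-target distillation: the temperature `T`, the `alpha` weighting, and all function names are assumptions for illustration, not the paper's exact loss formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T yields a softer distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of a soft-target KL term and the usual cross-entropy term.

    The T**2 factor keeps the soft-target gradient magnitude comparable
    across temperatures, as in standard knowledge distillation.
    """
    p_teacher = softmax(teacher_logits, T)   # soft targets from the teacher
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    # Hard-label cross-entropy at T = 1.
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * (T ** 2) * kl.mean() + (1 - alpha) * ce.mean()
```

With `alpha=1.0` the student learns only from the soft targets; in practice both terms are combined so the student matches the teacher's inter-class similarity structure while still fitting the ground-truth labels.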
Pages: 10