Motion recognition method of college football teaching based on convolution of spatio-temporal graph

Cited: 1
Authors
Yang, Chun [1 ]
Sun, Wei [1 ]
Li, Ningning [2 ]
Affiliations
[1] Tangshan Normal Univ, Dept Phys Educ, Tangshan, Peoples R China
[2] Hengshui Univ, Dept Phys Educ, Hengshui, Peoples R China
Keywords
Space-time graph convolution; football teaching; motion recognition;
DOI
10.3233/JIFS-230890
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Over the past decade, living standards have risen and public attention to competitive sports has grown. In today's information age, data on sports and athletes are highly valuable, particularly in team football. College football coaches can use such data to analyze their own players and opposing players and devise better tactics to win matches. At present, however, most match statistics must be recorded and tallied manually, either on site or after the game, and omissions and other errors are inevitable in the process. To address this problem, a motion recognition method based on spatio-temporal graph convolution is proposed. The method combines machine vision with motion recognition: the joint movements of individual football players are extracted via pose estimation, and motion recognition results are obtained from the resulting skeleton sequences. The method was evaluated on the KTH dataset. The results show that football motion recognition with the proposed method reached 98% accuracy on the dataset, an improvement of nearly 5 percentage points over existing state-of-the-art methods, while the error rate on football movements was below 5%. This indicates that the method can effectively recognize football actions, can be applied widely in other fields, and can advance human motion recognition in areas such as human-computer interaction and smart cities.
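The pipeline described in the abstract (pose estimation → per-joint features → spatio-temporal graph convolution → recognition) can be sketched as a single ST-GCN-style block. The following is an illustrative NumPy sketch of a generic spatio-temporal graph convolution, not the authors' implementation; the skeleton graph, shapes, and weight layout are all assumptions for the example.

```python
import numpy as np

def st_graph_conv(x, A, W_spatial, w_temporal):
    """One spatio-temporal graph convolution block (illustrative sketch).

    x          : (T, V, C) joint features over T frames, V joints, C channels
    A          : (V, V) skeleton adjacency matrix (with self-loops)
    W_spatial  : (C, C_out) channel-mixing weights for the spatial step
    w_temporal : (K,) 1-D temporal filter applied per joint and channel
    """
    # Symmetrically normalise the adjacency: D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt

    # Spatial step: aggregate features of neighbouring joints, then mix channels.
    y = np.einsum('uv,tvc->tuc', A_hat, x) @ W_spatial

    # Temporal step: 1-D convolution along the frame axis with 'same' padding.
    K = len(w_temporal)
    pad = K // 2
    y_pad = np.pad(y, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(y)
    for k in range(K):
        out += w_temporal[k] * y_pad[k:k + y.shape[0]]
    return np.maximum(out, 0.0)  # ReLU activation

# Toy usage: a 5-joint chain "skeleton" over 8 frames with 3 input channels.
T, V, C = 8, 5, 3
rng = np.random.default_rng(0)
x = rng.standard_normal((T, V, C))          # e.g. pose-estimation joint features
A = np.eye(V) + np.eye(V, k=1) + np.eye(V, k=-1)  # chain graph + self-loops
W = rng.standard_normal((C, 4))
out = st_graph_conv(x, A, W, np.array([0.25, 0.5, 0.25]))
```

In a full recognition network, several such blocks would be stacked and followed by global pooling and a classifier over action labels; here `out` has shape `(8, 5, 4)`, i.e. the same frames and joints with 4 output channels.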
Pages: 9095-9108 (14 pages)
Related Papers
50 in total
  • [21] Spatio-temporal convolution kernels
    Knauf, Konstantin
    Memmert, Daniel
    Brefeld, Ulf
    MACHINE LEARNING, 2016, 102 (02) : 247 - 273
  • [22] STCA: an action recognition network with spatio-temporal convolution and attention
    Tian, Qiuhong
    Miao, Weilun
    Zhang, Lizao
    Yang, Ziyu
    Yu, Yang
    Zhao, Yanying
    Yao, Lan
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2025, 14 (1)
  • [23] A Spatio-Temporal Motion Network for Action Recognition Based on Spatial Attention
    Yang, Qi
    Lu, Tongwei
    Zhou, Huabing
    ENTROPY, 2022, 24 (03)
  • [24] A heterogeneous traffic spatio-temporal graph convolution model for traffic prediction
    Xu, Jinhua
    Li, Yuran
    Lu, Wenbo
    Wu, Shuai
    Li, Yan
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 641
  • [25] SPATIO-TEMPORAL ATTENTION GRAPH CONVOLUTION NETWORK FOR FUNCTIONAL CONNECTOME CLASSIFICATION
    Wang, Wenhan
    Kong, Youyong
    Hou, Zhenghua
    Yang, Chunfeng
    Yuan, Yonggui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 1486 - 1490
  • [26] Spatio-temporal interactive graph convolution network for vehicle trajectory prediction
    Shen, Guojiang
    Li, Pengfei
    Chen, Zhiyu
    Yang, Yao
    Kong, Xiangjie
    INTERNET OF THINGS, 2023, 24
  • [27] Spatio-Temporal Graph Attention Convolution Network for Traffic Flow Forecasting
    Liu, Kun
    Zhu, Yifan
    Wang, Xiao
    Ji, Hongya
    Huang, Chengfei
    TRANSPORTATION RESEARCH RECORD, 2024, 2678 (09) : 136 - 149
  • [28] Robust human action recognition based on spatio-temporal descriptors and motion temporal templates
    Dou, Jianfang
    Li, Jianxun
    OPTIK, 2014, 125 (07): : 1891 - 1896
  • [29] ACTION RECOGNITION USING SPATIO-TEMPORAL DIFFERENTIAL MOTION
    Yadav, Gaurav Kumar
    Sethi, Amit
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 3415 - 3419
  • [30] Human action recognition based on graph-embedded spatio-temporal subspace
    Tseng, Chien-Chung
    Chen, Ju-Chin
    Fang, Ching-Hsien
    Lien, Jenn-Jier James
    PATTERN RECOGNITION, 2012, 45 (10) : 3611 - 3624