Improving Badminton Action Recognition Using Spatio-Temporal Analysis and a Weighted Ensemble Learning Model

Cited: 0
Authors
Asriani, Farida [1 ,2 ]
Azhari, Azhari [1 ]
Wahyono, Wahyono [1 ]
Affiliations
[1] Univ Gadjah Mada, Dept Comp Sci & Elect, Yogyakarta 55281, Indonesia
[2] Univ Jenderal Soedirman, Elect Engn Dept, Purbalingga 53371, Indonesia
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2024, Vol. 81, No. 2
Keywords
Weighted ensemble learning; badminton action; soft voting classifier; joint skeleton; fast dynamic time warping; spatiotemporal;
DOI
10.32604/cmc.2024.058193
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Incredible progress has been made in human action recognition (HAR), significantly impacting computer vision applications in sports analytics. However, identifying dynamic and complex movements in sports like badminton remains challenging because it demands high recognition accuracy and robust handling of complex motion patterns. Deep learning techniques such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and graph convolutional networks (GCNs) improve recognition on large datasets, while traditional machine learning methods such as support vector machines (SVM), random forests (RF), and logistic regression (LR), combined with handcrafted features and ensemble approaches, perform well but struggle with the complexity of fast-paced sports like badminton. We propose an ensemble learning model combining SVM, LR, RF, and adaptive boosting (AdaBoost) for badminton action recognition. The data in this study consist of video recordings of badminton stroke techniques, from which spatiotemporal features were extracted. The spatial features are the three-dimensional distances between each skeleton point and the right hip. The temporal features are the results of Fast Dynamic Time Warping (FDTW) calculations applied to 15 frames of each video sequence. The weighted ensemble model combines SVM, LR, RF, and AdaBoost through a soft voting classifier to enhance the accuracy of badminton action recognition. The E2 ensemble model, which combines SVM, LR, and AdaBoost, achieves the highest accuracy of 95.38%.
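The pipeline described in the abstract (per-joint 3-D distances to the right hip as spatial features, then a weighted soft-voting ensemble of SVM, LR, and AdaBoost) can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: the feature data below are synthetic, and the ensemble weights and right-hip joint index are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def spatial_features(joints, right_hip_idx=8):
    """3-D Euclidean distance of every skeleton joint to the right hip.

    joints: array of shape (n_joints, 3); the hip index is an assumption
    that depends on the pose-estimation skeleton layout used.
    """
    hip = joints[right_hip_idx]
    return np.linalg.norm(joints - hip, axis=1)

# Synthetic stand-in for the extracted spatiotemporal feature vectors
# (3 hypothetical stroke classes).
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# E2-style weighted soft-voting ensemble: SVM + LR + AdaBoost.
e2 = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),  # soft voting needs probabilities
                ("lr", LogisticRegression(max_iter=1000)),
                ("ada", AdaBoostClassifier())],
    voting="soft",
    weights=[2, 1, 1])  # illustrative weights, not the paper's learned ones
e2.fit(X_tr, y_tr)
proba = e2.predict_proba(X_te)  # weighted average of class probabilities
```

Soft voting averages the members' predicted class probabilities (scaled by the weights) and picks the argmax, which is what lets a weighted ensemble outperform any single classifier when their errors are uncorrelated.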
Pages: 3079-3096
Page count: 18
Related Papers
50 records in total
  • [21] LEARNING A HIERARCHICAL SPATIO-TEMPORAL MODEL FOR HUMAN ACTIVITY RECOGNITION
    Xu, Wanru
    Miao, Zhenjiang
    Zhang, Xiao-Ping
    Tian, Yi
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 1607 - 1611
  • [22] Action Recognition with Multiscale Spatio-Temporal Contexts
    Wang, Jiang
    Chen, Zhuoyuan
    Wu, Ying
    2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2011
  • [23] Efficient spatio-temporal network for action recognition
    Su, Yanxiong
    Zhao, Qian
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (05)
  • [24] Action recognition by spatio-temporal oriented energies
    Zhen, Xiantong
    Shao, Ling
    Li, Xuelong
    INFORMATION SCIENCES, 2014, 281 : 295 - 309
  • [25] Spatio-temporal information for human action recognition
    Yao, Li
    Liu, Yunjian
    Huang, Shihui
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2016
  • [27] Spatio-Temporal Fusion Networks for Action Recognition
    Cho, Sangwoo
    Foroosh, Hassan
    COMPUTER VISION - ACCV 2018, PT I, 2019, 11361 : 347 - 364
  • [28] Spatio-Temporal Pyramid Model Based on Depth Maps for Action Recognition
    Xu, Haining
    Chen, Enqing
    Liang, Chengwu
    Qi, Lin
    Guan, Ling
    2015 IEEE 17TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2015
  • [29] Spatio-temporal stacking model for skeleton-based action recognition
    Zhong, Yufeng
    Yan, Qiuyan
    APPLIED INTELLIGENCE, 2022, 52 (11) : 12116 - 12130
  • [30] Spatio-Temporal Weighted Posture Motion Features for Human Skeleton Action Recognition Research
    Ding C.-Y.
    Liu K.
    Li G.
    Yan L.
    Chen B.-Y.
    Zhong Y.-M.
    Jisuanji Xuebao/Chinese Journal of Computers, 2020, 43 (01): 29-40