An end-to-end hand action recognition framework based on cross-time mechanomyography signals

Cited by: 0
Authors
Zhang, Yue [1 ]
Li, Tengfei [1 ]
Zhang, Xingguo [1 ]
Xia, Chunming [2 ]
Zhou, Jie [1 ]
Sun, Maoxun [3 ]
Affiliations
[1] Nantong Univ, Sch Mech Engn, Nantong 226019, Peoples R China
[2] East China Univ Sci & Technol, Sch Mech & Power Engn, Shanghai 200237, Peoples R China
[3] Univ Shanghai Sci & Technol, Sch Mech Engn, Shanghai 200093, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Densely connected convolutional networks (DenseNet); Hand action recognition; Mechanomyography (MMG); CLASSIFICATION;
DOI
10.1007/s40747-024-01541-w
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The susceptibility of mechanomyography (MMG) signal acquisition to sensor donning and doffing, together with the pronounced time-varying characteristics of biomedical signals collected over different periods, inevitably reduces model recognition accuracy. To investigate these adverse effects on hand action recognition, a 12-day cross-time MMG data collection experiment with eight subjects was conducted using an armband sensor, and a novel MMG-based hand action recognition framework built on densely connected convolutional networks (DenseNet) was proposed. In this study, data from 10 days were selected as the training subset, and the data from the remaining 2 days were used as the test set to evaluate model performance. As the number of days in the training set increases, recognition accuracy rises and becomes more stable, peaking at an average recognition rate of 99.57% (± 0.37%) when the training set includes all 10 days. In addition, when part of the training subset is extracted and recombined into a new dataset, better classification performance is achieved on the test set. The proposed method effectively mitigates the adverse effects of sensor donning and doffing on recognition results.
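The abstract describes an end-to-end pipeline: raw MMG windows recorded by an armband feed a densely connected convolutional classifier, which is trained on data from 10 acquisition days and evaluated on the 2 held-out days. The sketch below illustrates that idea only and is not the authors' implementation; it assumes PyTorch, 8 MMG channels, 256-sample windows, 10 hand-action classes, and illustrative names (DenseBlock1d, MMGDenseNet1d, split_by_day). The actual sensor count, window length, network depth, and class set are defined by the paper's experimental setup and may differ.

```python
import torch
import torch.nn as nn


class DenseBlock1d(nn.Module):
    """Densely connected 1-D conv block: every layer receives the
    concatenation of all preceding feature maps (the DenseNet idea)."""

    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(ch),
                nn.ReLU(inplace=True),
                nn.Conv1d(ch, growth, kernel_size=3, padding=1, bias=False),
            ))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class MMGDenseNet1d(nn.Module):
    """Illustrative end-to-end classifier: raw MMG window -> action logits."""

    def __init__(self, n_channels: int = 8, n_classes: int = 10):
        super().__init__()
        self.stem = nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3)
        self.block = DenseBlock1d(32)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(self.block.out_channels, n_classes),
        )

    def forward(self, x):  # x: (batch, n_channels, window_len)
        return self.head(self.block(self.stem(x)))


def split_by_day(windows, labels, days, test_days=(11, 12)):
    """Cross-time protocol: train on windows from 10 acquisition days,
    test on the 2 held-out days (each window carries its day index)."""
    train_idx = [i for i, d in enumerate(days) if d not in test_days]
    test_idx = [i for i, d in enumerate(days) if d in test_days]
    return (windows[train_idx], labels[train_idx]), (windows[test_idx], labels[test_idx])


if __name__ == "__main__":
    model = MMGDenseNet1d()
    dummy = torch.randn(4, 8, 256)   # 4 windows, 8 MMG channels, 256 samples each
    print(model(dummy).shape)        # torch.Size([4, 10])
```

Keeping the train/test boundary at the level of whole acquisition days, rather than shuffling windows across days, is what makes the evaluation cross-time: the test days carry the donning/doffing and time-varying effects the paper targets.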
Pages: 6953-6964
Number of pages: 12
Related Papers
50 records
  • [1] End-to-End Ultrasonic Hand Gesture Recognition
    Fertl, Elfi
    Nguyen, Do Dinh Tan
    Krueger, Martin
    Stettinger, Georg
    Padial-Allue, Ruben
    Castillo, Encarnacion
    Cuellar, Manuel P.
    [J]. SENSORS, 2024, 24 (09)
  • [2] GeometryMotion-Transformer: An End-to-End Framework for 3D Action Recognition
    Liu, Jiaheng
    Guo, Jinyang
    Xu, Dong
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 5649 - 5661
  • [3] An End-to-End Face Recognition System Evaluation Framework
    West Virginia University
  • [4] An end-to-end generative framework for video segmentation and recognition
    Kuehne, Hilde
    Gall, Juergen
    Serre, Thomas
    [J]. 2016 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2016), 2016,
  • [5] cosGCTFormer: An end-to-end driver state recognition framework
    Huang, Jing
    Liu, Tingnan
    Hu, Lin
[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261
  • [6] An End-To-End Emotion Recognition Framework Based on Temporal Aggregation of Multimodal Information
    Radoi, Anamaria
    Birhala, Andreea
    Ristea, Nicolae-Catalin
    Dutu, Liviu-Cristian
    [J]. IEEE ACCESS, 2021, 9 : 135559 - 135570
  • [7] Tibetan-Mandarin Bilingual Speech Recognition Based on End-to-End Framework
    Wang, Qingnan
    Guo, Wu
    Chen, Peixin
    Song, Yan
    [J]. 2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017), 2017, : 1214 - 1217
  • [8] Depth-based end-to-end deep network for human action recognition
    Chaudhary, Sachin
    Murala, Subrahmanyam
    [J]. IET COMPUTER VISION, 2019, 13 (01) : 15 - 22
  • [9] SmartEEG: An End-to-End Framework for the Analysis and Classification of EEG signals
    Ciurea, Alexe
    Manoila, Cristina-Petruta
    Ionescu, Bogdan
    [J]. 2021 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB 2021), 9TH EDITION, 2021,
  • [10] End-to-End Multimodal Emotion Recognition Based on Facial Expressions and Remote Photoplethysmography Signals
    Li, Jixiang
    Peng, Jianxin
    [J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (10) : 6054 - 6063