Multi-Stream Convolutional Neural Network-Based Wearable, Flexible Bionic Gesture Surface Muscle Feature Extraction and Recognition

Cited: 2
Authors
Liu, Wansu [1 ]
Lu, Biao [1 ]
Affiliations
[1] Suzhou Univ, Information Engineering Dept, Suzhou, Peoples R China
Keywords
multistream convolutional neural networks; wearable flexibility; bionic gestures; surface muscles; feature extraction recognition
DOI
10.3389/fbioe.2022.833793
Chinese Library Classification (CLC)
Q81 [Bioengineering (Biotechnology)]; Q93 [Microbiology]
Subject Classification Codes
071005; 0836; 090102; 100705
Abstract
Surface electromyographic (sEMG) signals are weak physiological electrical signals that are highly susceptible to coupling with external noise, which makes signal acquisition and processing difficult. Using sEMG signals to analyze human motion intention mainly involves data preprocessing, feature extraction, and model classification. Feature extraction is an extremely critical step; however, it often relies on many manually designed features that require specialized domain knowledge, so the experimenter must spend considerable time and effort on it. To address this problem, and drawing on the success of deep learning in image classification, deep learning methods that extract features automatically are applied to sEMG-based gesture recognition. In this paper, sEMG is captured using a wearable, flexible bionic device that is simple to operate and highly secure. A multi-stream convolutional neural network algorithm is proposed to enhance the ability of sEMG to characterize hand actions in gesture recognition. The algorithm virtually augments the signal channels by reconstructing the sample structure of the sEMG, providing richer input information for gesture recognition. Methods for noise processing, active segment detection, and feature extraction are investigated, and a basic gesture recognition method based on combining multichannel sEMG signals with inertial signals is proposed. Suitable filters are designed for the common noise in the signal. An improved threshold-based moving average method is used to reduce the segmentation error rate caused by the short resting intervals in continuous gesture signals. Three machine learning algorithms, K-nearest neighbor, linear discriminant analysis, and the multi-stream convolutional neural network, are used in hand action classification experiments, and a comparison of the results demonstrates the effectiveness of the multi-stream convolutional neural network. A final classification accuracy of up to 93.69% was obtained for the 10-gesture set. Separability analysis showed significant differences between the signals of the two cognitive-behavioral tasks when the optimal electrode combination was used. A cross-subject analysis of the test-set subjects showed that the average correct classification rate using the pervasive electrode combination reached 93.18%.
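As a concrete illustration of the noise-processing step described in the abstract, the sketch below band-passes a raw sEMG channel and suppresses power-line interference with a notch filter. The 1 kHz sampling rate, 20-450 Hz pass band, and 50 Hz notch frequency are assumptions chosen for illustration, not parameters reported in the paper.

```python
# Minimal sEMG noise-filtering sketch: band-pass plus power-line notch.
# Sampling rate and cutoff values are illustrative assumptions.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000.0  # assumed sampling rate in Hz

def preprocess_semg(raw, fs=FS):
    """Band-pass the sEMG signal and suppress power-line interference."""
    # 4th-order Butterworth band-pass keeping the usual sEMG band (assumed 20-450 Hz).
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, raw)
    # Narrow notch at 50 Hz for mains hum (60 Hz in some regions).
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, x)

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    noisy = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)
    print(preprocess_semg(noisy).shape)
```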
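The threshold-based moving average segmentation can be sketched as follows: rectify the signal, smooth it with a moving-average envelope, threshold the envelope, and merge active segments separated by rests that are too short to mark a real gesture boundary (the error source the abstract mentions). Window length, threshold value, and minimum-rest duration below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of threshold-based moving-average active-segment detection for
# continuous sEMG. All numeric parameters are assumed for illustration.
import numpy as np

def detect_active_segments(x, fs=1000.0, win_ms=100, thresh=0.05, min_rest_ms=150):
    """Return (start, end) sample indices of segments whose smoothed
    envelope exceeds the threshold; short rests between segments are
    merged so brief pauses do not split one gesture in two."""
    win = int(win_ms * fs / 1000)
    envelope = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    active = envelope > thresh

    # Collect raw segments from the boolean mask.
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append([start, i])
            start = None
    if start is not None:
        segments.append([start, len(x)])

    # Merge segments separated by a rest shorter than min_rest_ms.
    min_rest = int(min_rest_ms * fs / 1000)
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < min_rest:
            merged[-1][1] = seg[1]
        else:
            merged.append(seg)
    return [(s, e) for s, e in merged]
```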
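For the K-nearest neighbor and linear discriminant analysis baselines, hand-crafted features are typically computed per active window. The abstract does not list the paper's exact feature set, so the sketch below uses four common time-domain features (mean absolute value, root mean square, zero crossings, waveform length) as a stand-in.

```python
# Common time-domain sEMG features per window; the feature set and the
# 8-channel, 200-sample window shape are assumptions for this sketch.
import numpy as np

def time_domain_features(window):
    """window: (channels, samples) array -> 1-D feature vector."""
    mav = np.mean(np.abs(window), axis=1)                        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=1))                  # root mean square
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)   # zero-crossing count
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)         # waveform length
    return np.concatenate([mav, rms, zc, wl])

if __name__ == "__main__":
    x = np.random.randn(8, 200)              # assumed 8-channel, 200-sample window
    print(time_domain_features(x).shape)     # (32,)
```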
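Finally, a minimal multi-stream CNN in the spirit of the proposed classifier: several parallel convolutional streams each process a different view of the same sEMG window (a simple stand-in for the virtual channel augmentation described above), and their features are concatenated for 10-class gesture classification. The stream count, kernel sizes, and the 8-channel, 200-sample window shape are assumptions of this sketch, not the architecture reported in the paper.

```python
# Illustrative multi-stream CNN for windowed sEMG; architecture details
# are assumptions, not the paper's reported design.
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One convolutional stream over a (channels x time) sEMG window."""
    def __init__(self, in_ch=8, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, feat, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # -> (batch, feat, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)        # (batch, feat)

class MultiStreamCNN(nn.Module):
    """Parallel streams over different views of the same window,
    fused by concatenation and classified into n_classes gestures."""
    def __init__(self, n_streams=3, in_ch=8, n_classes=10):
        super().__init__()
        self.streams = nn.ModuleList([StreamCNN(in_ch) for _ in range(n_streams)])
        self.classifier = nn.Linear(64 * n_streams, n_classes)

    def forward(self, views):
        # views: list of n_streams tensors, each (batch, in_ch, time)
        feats = [stream(v) for stream, v in zip(self.streams, views)]
        return self.classifier(torch.cat(feats, dim=1))

if __name__ == "__main__":
    model = MultiStreamCNN()
    window = torch.randn(4, 8, 200)               # batch of raw windows
    views = [window, window.abs(), window ** 2]   # simple "virtual channel" views
    print(model(views).shape)                     # torch.Size([4, 10])
```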
Pages: 13