Machine Learning Model Comparisons of User Independent & Dependent Intent Recognition Systems for Powered Prostheses

Cited by: 31
Authors
Bhakta, Krishan [1 ,2 ]
Camargo, Jonathan [1 ,2 ,3 ]
Donovan, Luke [1 ,4 ]
Herrin, Kinsey [1 ,2 ,3 ]
Young, Aaron [1 ,2 ,3 ]
Institutions
[1] Exoskeleton & Intelligent Controls EPIC Lab, Atlanta, GA 30332 USA
[2] Georgia Inst Technol, Woodruff Sch Mech Engn, Atlanta, GA 30332 USA
[3] Georgia Inst Technol, Inst Robot & Intelligent Machines, Atlanta, GA 30332 USA
[4] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
Prosthetics and exoskeletons; wearable robots; human performance augmentation; mode classification; transfemoral amputation; TRANSFEMORAL AMPUTEES; CLASSIFICATION METHOD; FEATURE-EXTRACTION; INTACT LIMB; AMBULATION; WALKING; GAIT; LEG; DESIGN;
DOI
10.1109/LRA.2020.3007480
Chinese Library Classification
TP24 [Robotics];
Discipline Classification Codes
080202; 1405;
Abstract
Developing intelligent prosthetic controllers that recognize user intent across users is a challenge. Machine learning algorithms present an opportunity to develop methods for predicting a user's locomotion mode. Currently, linear discriminant analysis (LDA) is the state-of-the-art standard for subject-dependent models and has also been used in the development of subject-independent applications. However, the performance of subject-independent models differs radically from that of their subject-dependent counterparts. Furthermore, most studies limit the evaluation to a fixed terrain with a single stair height and ramp inclination. In this study, we investigated the use of the XGBoost algorithm to develop a subject-independent model across 8 individuals with transfemoral amputation. We evaluated the performance of XGBoost across different stair heights and inclination angles and found that it generalizes well across preset conditions. Our findings suggest that XGBoost offers a potential benefit for both subject-independent and subject-dependent algorithms, outperforming LDA and NN (DEP SS error: 2.93% +/- 0.49%, DEP TS error: 7.03% +/- 0.74%, IND SS error: 10.12% +/- 3.16%, and IND TS error: 15.78% +/- 2.39%) (p < 0.05). We also showed that including extra sensors continually improved model performance in both user-dependent and user-independent models (p < 0.05). Our study provides valuable information for making future intent recognition systems more reliable across different users and common community ambulation modes.
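The abstract distinguishes subject-dependent evaluation (train and test on the same user) from subject-independent evaluation (leave-one-subject-out). The sketch below illustrates that protocol on synthetic data; it is not the authors' code, and scikit-learn's GradientBoostingClassifier stands in for XGBoost. All names, dimensions, and the synthetic feature model are assumptions for illustration only.

```python
# Sketch: subject-dependent vs subject-independent mode-classification
# evaluation, comparing LDA with a gradient-boosted tree classifier
# (sklearn's GradientBoostingClassifier stands in for XGBoost here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
N_SUBJECTS, WINDOWS, N_FEATS, N_MODES = 4, 60, 8, 5

def make_subject(seed):
    """Synthetic sensor features: mode-specific means plus a per-subject bias."""
    r = np.random.default_rng(seed)
    offset = r.normal(0, 0.5, N_FEATS)  # subject-specific shift in feature space
    X, y = [], []
    for mode in range(N_MODES):
        centre = np.eye(N_MODES, N_FEATS)[mode] * 3  # one-hot-ish mode centre
        X.append(r.normal(centre + offset, 1.0, (WINDOWS, N_FEATS)))
        y.append(np.full(WINDOWS, mode))
    return np.vstack(X), np.concatenate(y)

subjects = [make_subject(s) for s in range(N_SUBJECTS)]

def dependent_error(model_cls):
    """Subject-dependent: train/test split within each subject, average error."""
    errs = []
    for X, y in subjects:
        idx = rng.permutation(len(y))
        half = len(y) // 2
        model = model_cls().fit(X[idx[:half]], y[idx[:half]])
        errs.append(1.0 - model.score(X[idx[half:]], y[idx[half:]]))
    return float(np.mean(errs))

def independent_error(model_cls):
    """Subject-independent: leave-one-subject-out across all subjects."""
    errs = []
    for i in range(N_SUBJECTS):
        Xtr = np.vstack([s[0] for j, s in enumerate(subjects) if j != i])
        ytr = np.concatenate([s[1] for j, s in enumerate(subjects) if j != i])
        Xte, yte = subjects[i]
        model = model_cls().fit(Xtr, ytr)
        errs.append(1.0 - model.score(Xte, yte))
    return float(np.mean(errs))

for name, cls in [("LDA", LinearDiscriminantAnalysis),
                  ("GBT", GradientBoostingClassifier)]:
    print(f"{name}: DEP err={dependent_error(cls):.3f}  "
          f"IND err={independent_error(cls):.3f}")
```

Because each synthetic subject carries its own feature-space offset, the leave-one-subject-out error is typically higher than the within-subject error, mirroring the dependent/independent gap reported in the abstract.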
Pages: 5393-5400 (8 pages)
Related Papers (50 total)
  • [21] Machine Learning Assisted Raman in Optofluidics for User-Independent Biofluid Diagnostics
    Storey, Emily E.
    Wu, Duxuan X.
    Helmy, Amr S.
    2019 CONFERENCE ON LASERS AND ELECTRO-OPTICS (CLEO), 2019,
  • [22] Understanding User Sensemaking in Machine Learning Fairness Assessment Systems
    Gu, Ziwei
    Yan, Jing Nathan
    Rzeszotarski, Jeffrey M.
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 658 - 668
  • [23] Recognition of user-dependent and independent static hand gestures: Application to sign language
    Sadeddine, Khadidja
    Chelali, Zohra Fatma
    Djeradi, Rachida
    Djeradi, Amar
    Benabderrahmane, Sidahmed
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 79
  • [24] Machine learning-based self-powered acoustic sensor for speaker recognition
    Han, Jae Hyun
    Bae, Kang Min
    Hong, Seong Kwang
    Park, Hyunsin
    Kwak, Jun-Hyuk
    Wang, Hee Seung
    Joe, Daniel Juhyung
    Park, Jung Hwan
    Jung, Young Hoon
    Hur, Shin
    Yoo, Chang D.
    Lee, Keon Jae
    NANO ENERGY, 2018, 53 : 658 - 665
  • [25] Keystroke User Recognition through Extreme Learning Machine and Evolving Cluster Method
    Ravindran, Sriram
    Gautam, Chandan
    Tiwari, Aruna
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMPUTING RESEARCH (ICCIC), 2015, : 568 - 572
  • [26] A user-adaptive deep machine learning method for handwritten digit recognition
    Zhang, Huijie
    Wang, Qiyu
    Luo, Xin
    Yin, Yufang
    Chen, Yingsong
    Cui, Zhouping
    Zhou, Quan
    PROCEEDINGS OF THE 2018 1ST IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE INNOVATION AND INVENTION (ICKII 2018), 2018, : 108 - 111
  • [27] Implementation of Smartwatch User Interface Using Machine Learning based Motion Recognition
    Lee, Kyung-Taek
    Yoon, Hyoseok
    Lee, Youn-Sung
    2018 32ND INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN), 2018, : 807 - 809
  • [28] MOBILE USER ENGLISH LEARNING PATTERN RECOGNITION MODEL BASED ON INTEGRATED LEARNING
    Zhang, Qian
    Ren, Y.
    Scalable Computing, 2024, 25 (04): : 2371 - 2384
  • [29] Named entity recognition based on a machine learning model
    Wang, Jing
    Liu, Zhijing
    Zhao, Hui
    Research Journal of Applied Sciences, Engineering and Technology, 2012, 4 (20) : 3973 - 3980
  • [30] Fast Image Recognition Based on Independent Component Analysis and Extreme Learning Machine
    Zhang, Shujing
    He, Bo
    Nian, Rui
    Wang, Jing
    Han, Bo
    Lendasse, Amaury
    Yuan, Guang
    Cognitive Computation, 2014, 6 : 405 - 422