Sparse Identification of Motor Learning Using Proxy Process Models

Cited by: 0
Authors
Parmar, Pritesh N. [1 ,2 ]
Patton, James L. [1 ,2 ]
Affiliations
[1] Univ Illinois, Richard & Loan Hill Dept Bioengn, Chicago, IL 60607 USA
[2] Shirley Ryan Abil Lab, Chicago, IL 60611 USA
Funding
US National Institutes of Health (NIH)
Keywords
ERROR; ADAPTATION; STROKE; DYNAMICS; RECOVERY; ARM; TRANSFORMATION; COORDINATION; INDIVIDUALS; MOVEMENTS;
DOI
10.1109/icorr.2019.8779423
CLC number
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Enhanced neurorehabilitation using robotic and virtual-reality technologies requires a computational framework that can readily assess the time course of motor learning in order to recommend optimal training conditions. Error feedback plays an important role in the acquisition of motor skills for goal-directed movements by facilitating the learning of internal models. In this study, we investigated changes in movement errors during sparse, intermittent "catch" (no-vision) trials, which served as a "proxy" for the underlying process of internal model formation. We trained 15 healthy subjects to reach for visual targets under eight distinct visuomotor distortions, removing visual feedback intermittently. We tested their learning data from no-vision trials against our so-called proxy process models, which assumed linear, affine, and second-order model structures. To handle the sparse (no-vision) observations, we allowed the proxy process models either to update trial-to-trial, predicting across the gaps between sparse samples, or to update sample-to-sample, disregarding the trial gaps. We exhaustively cross-validated our models across subjects and across learning tasks. The results revealed that the second-order model with trial-to-trial updates best predicted the proxy process of visuomotor learning.
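The distinction between trial-to-trial and sample-to-sample updates can be illustrated with a minimal sketch. The snippet below fits a first-order affine learning rule, e[n+1] = a*e[n] + b, to a handful of sparse catch-trial errors under both update schemes; the trial indices, error values, and fitting routine are illustrative assumptions and are not the authors' implementation or data.

```python
# A minimal sketch (not the authors' code) of fitting an affine "proxy process"
# model, e[n+1] = a*e[n] + b, to sparse catch-trial errors.
# Two update schemes are compared, mirroring the abstract:
#   * trial-to-trial: the model iterates once per trial (vision or no-vision),
#     so predictions are propagated across the gap between catch trials;
#   * sample-to-sample: consecutive catch-trial observations are treated as
#     adjacent, disregarding the intervening vision trials.

import numpy as np
from scipy.optimize import least_squares

catch_trials = np.array([2, 7, 12, 17, 22, 27])          # hypothetical no-vision trial indices
catch_errors = np.array([4.8, 3.1, 2.0, 1.4, 1.0, 0.8])  # hypothetical errors (arbitrary units)

def predict(params, update):
    """One-step-ahead prediction of the next observed catch-trial error."""
    a, b = params
    e = catch_errors[:-1]
    if update == "trial":
        gap = np.diff(catch_trials)                  # trials elapsed between catch trials
        # applying e <- a*e + b 'gap' times, written in closed form
        return a**gap * e + b * (1 - a**gap) / (1 - a)
    # "sample": ignore the gaps entirely
    return a * e + b

def residuals(params, update):
    return catch_errors[1:] - predict(params, update)

for update in ("trial", "sample"):
    fit = least_squares(residuals, x0=[0.8, 0.1], args=(update,))
    rmse = np.sqrt(np.mean(fit.fun**2))
    print(f"{update}-wise update: a={fit.x[0]:.3f}, b={fit.x[1]:.3f}, RMSE={rmse:.3f}")
```

In this sketch, held-out prediction error (e.g., leave-one-subject-out RMSE) would be the basis for comparing model structures and update schemes, in the spirit of the cross-validation described in the abstract.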
Pages: 855-860
Number of pages: 6