The role of training variability for model-based and model-free learning of an arbitrary visuomotor mapping

Cited: 0
Authors
Velazquez-Vargas, Carlos A. [1 ]
Daw, Nathaniel D. [1 ,2 ]
Taylor, Jordan A. [1 ,2 ]
Affiliations
[1] Princeton Univ, Dept Psychol, Princeton, NJ 08544 USA
[2] Princeton Univ, Princeton Neurosci Inst, Princeton, NJ USA
Funding
U.S. National Institutes of Health;
Keywords
SENSORY PREDICTION; SCHEMA THEORY; MOTOR; MOVEMENT; DYNAMICS; ADAPTATION; EXPLICIT; IMPLICIT; REPRESENTATIONS; ACQUISITION;
DOI
10.1371/journal.pcbi.1012471
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
A fundamental feature of the human brain is its capacity to learn novel motor skills. This capacity requires the formation of vastly different visuomotor mappings. Using a grid navigation task, we investigated whether training variability would enhance the flexible use of a visuomotor mapping (a key-to-direction rule), leading to better generalization performance. Experiments 1 and 2 show that participants trained to move between multiple start-target pairs exhibited greater generalization to both distal and proximal targets than participants trained to move between a single pair. This finding suggests that limited variability can impair decisions even in simple tasks without planning. In addition, during the training phase, participants exposed to higher variability were more inclined to choose options that, counterintuitively, moved the cursor away from the target while minimizing its actual distance under the constrained mapping, suggesting greater engagement in model-based computations. In Experiments 3 and 4, we showed that the limited generalization performance of participants trained with a single pair can be enhanced by a short period of variability introduced early in learning or by incorporating stochasticity into the visuomotor mapping. Our computational modeling analyses revealed that a hybrid of model-free and model-based computations, with different mixing weights for the training and generalization phases, best described participants' data. Importantly, the differences in the model-based weights between our experimental groups paralleled the behavioral findings during training and generalization. Taken together, our results suggest that training variability enables the flexible use of the visuomotor mapping, potentially by preventing the consolidation of habits under the continuous demand to change responses.

Author summary
The development of new motor skills often requires learning novel associations between actions and outcomes. These mappings can be flexible and generalize to new situations, or remain local with narrow generalization, similar to stimulus-action associations. In a series of experiments using a navigation task, we showed that generalizable mappings are favored under a training regime with variability, whereas local mappings with narrow generalization develop in its absence. Training variability was generated either with multiple goals or with stochasticity in the action-outcome mapping, and both regimes led to successful generalization. In addition, we showed that the generalization benefits of training variability persist even when participants are subsequently exposed to no variability for a prolonged period. These results were best described by a mixture of model-free and model-based reinforcement learning algorithms, with different mixture weights for the training and generalization phases.
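The hybrid model named in the abstract follows the standard formulation in which action values are a weighted mixture of model-based and model-free estimates, Q = w·Q_MB + (1 − w)·Q_MF, with the weight w fitted separately per phase. The sketch below is a minimal illustration of that idea only, not the authors' implementation: the grid size, the key-to-direction mapping, the learning rate, the softmax temperature, and the weight values w_train and w_gen are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 5                                      # hypothetical 5x5 grid; states are (row, col)
MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # key -> direction mapping
alpha, beta = 0.1, 5.0                        # assumed MF learning rate, softmax temperature
q_mf = np.zeros((GRID, GRID, len(MOVES)))     # model-free action values per state


def step(state, action):
    """Apply the key-to-direction mapping, clipping at the grid edges."""
    r, c = state
    dr, dc = MOVES[action]
    return (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))


def q_mb(state, goal):
    """Model-based values: one-step lookahead under the known mapping, scoring
    each action by the negative Manhattan distance of its outcome from the goal."""
    return np.array(
        [-(abs(step(state, a)[0] - goal[0]) + abs(step(state, a)[1] - goal[1]))
         for a in MOVES],
        dtype=float,
    )


def choose_action(state, goal, w):
    """Softmax choice over a w-weighted mix of MB and MF action values."""
    q = w * q_mb(state, goal) + (1.0 - w) * q_mf[state]
    p = np.exp(beta * (q - q.max()))
    return rng.choice(len(MOVES), p=p / p.sum())


def update_mf(state, action, reward):
    """Delta-rule update for the model-free values after an observed reward."""
    q_mf[state][action] += alpha * (reward - q_mf[state][action])


# Hypothetical phase-specific mixing weights: the modeling result reported in
# the abstract is that the best-fitting w differs between phases.
w_train, w_gen = 0.4, 0.8
a = choose_action((0, 0), goal=(4, 4), w=w_train)
update_mf((0, 0), a, reward=0.0)
```

Under this formulation, a larger w during generalization captures heavier reliance on lookahead with the learned mapping, while a smaller w during training captures habit-like, reward-driven responding.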
Pages: 43