The gradient of the reinforcement landscape influences sensorimotor learning

Cited by: 32
Authors
Cashaback, Joshua G. A. [1 ,2 ]
Lao, Christopher K. [3 ]
Palidis, Dimitrios J. [4 ,5 ,6 ]
Coltman, Susan K. [4 ,5 ,6 ]
McGregor, Heather R. [4 ,5 ,6 ]
Gribble, Paul L. [3 ,5 ,6 ,7 ]
Affiliations
[1] Univ Calgary, Human Performance Lab, Calgary, AB, Canada
[2] Univ Calgary, Hotchkiss Brain Inst, Calgary, AB, Canada
[3] Western Univ, Dept Physiol & Pharmacol, London, ON, Canada
[4] Western Univ, Grad Program Neurosci, London, ON, Canada
[5] Western Univ, Brain & Mind Inst, London, ON, Canada
[6] Western Univ, Dept Psychol, London, ON, Canada
[7] Haskins Labs Inc, New Haven, CT 06511 USA
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
TASK-IRRELEVANT; DECISION-THEORY; MOTOR; ADAPTATION; MOVEMENT; VARIABILITY; REWARD; REPRESENTATION; MEMORY;
DOI
10.1371/journal.pcbi.1006839
Chinese Library Classification
Q5 (Biochemistry)
Discipline codes
071010; 081704
Abstract
Consideration of previous successes and failures is essential to mastering a motor skill. Much of what we know about how humans and animals learn from such reinforcement feedback comes from experiments that involve sampling from a small number of discrete actions. Yet, it is less understood how we learn through reinforcement feedback when sampling from a continuous set of possible actions. Navigating a continuous set of possible actions likely requires using gradient information to maximize success. Here we addressed how humans adapt the aim of their hand when experiencing reinforcement feedback that was associated with a continuous set of possible actions. Specifically, we manipulated the change in the probability of reward given a change in motor action (the reinforcement gradient) to study its influence on learning. We found that participants learned faster when exposed to a steep gradient compared to a shallow gradient. Further, when initially positioned between a steep and a shallow gradient that rose in opposite directions, participants were more likely to ascend the steep gradient. We introduce a model that captures our results and several features of motor learning. Taken together, our work suggests that the sensorimotor system relies on temporally recent and spatially local gradient information to drive learning.

Author summary
In recent years it has been shown that reinforcement feedback may also subserve our ability to acquire new motor skills. Here we address how the reinforcement gradient influences motor learning. We found that a steeper gradient increased both the rate and likelihood of learning. Moreover, while many mainstream theories posit that we build a full representation of the reinforcement landscape, both our data and model suggest that the sensorimotor system relies primarily on temporally recent and spatially local gradient information to drive learning.
Our work provides new insights into how we sample from a continuous action-reward landscape to maximize success.
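The abstract's central idea, that learning on a continuous action set can be driven by temporally recent, spatially local gradient information rather than a full map of the landscape, can be illustrated with a minimal reward-modulated exploration sketch. This is not the authors' actual model: the logistic reward landscape, the parameter values, and the baseline-update rule below are all illustrative assumptions.

```python
import math
import random

def reward_prob(aim, slope):
    # Probability of binary reward as a logistic function of aim position.
    # `slope` sets the steepness of the reinforcement gradient; the logistic
    # landscape itself is an assumption for illustration, not the paper's task.
    return 1.0 / (1.0 + math.exp(-slope * aim))

def simulate(slope, trials=500, eta=0.5, noise=0.3, seed=0):
    # Reward-modulated exploration: motor variability plus binary feedback
    # yields a local, sample-based estimate of the reinforcement gradient.
    rng = random.Random(seed)
    aim = 0.0          # current intended action
    baseline = 0.0     # running average of recent reward
    total_reward = 0.0
    for _ in range(trials):
        action = aim + rng.gauss(0.0, noise)   # exploratory variability
        reward = 1.0 if rng.random() < reward_prob(action, slope) else 0.0
        # Shift the aim toward actions that beat the recent reward baseline
        # and away from those that fall short: stochastic gradient ascent
        # using only local, recent information.
        aim += eta * (reward - baseline) * (action - aim)
        baseline += 0.1 * (reward - baseline)
        total_reward += reward
    return total_reward / trials  # average hit rate over the session

# A steeper gradient gives a stronger trial-to-trial reward contrast for the
# same motor variability, so the learner ascends the landscape faster.
steep_hit_rate = simulate(slope=4.0)
shallow_hit_rate = simulate(slope=0.5)
```

Because the update uses only the deviation of the current action from the current aim, the learner never represents the full landscape; steepening the gradient raises the signal available per trial, consistent with the faster learning reported in the abstract.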
Pages: 27
Related papers
50 items
  • [41] End-to-end sensorimotor control problems of AUVs with deep reinforcement learning
    Wu, Hui
    Song, Shiji
    Hsu, Yachu
    You, Keyou
    Wu, Cheng
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 5869 - 5874
  • [42] Reinforcement learning of self-regulated sensorimotor β-oscillations improves motor performance
    Naros, G.
    Naros, I.
    Grimm, F.
    Ziemann, U.
    Gharabaghi, A.
    NEUROIMAGE, 2016, 134 : 142 - 152
  • [43] Circadian influences on sensorimotor control
    Jasper, Isabelle
    Haeussler, Andreas
    Hermsdoerfer, Joachim
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2008, 43 (3-4) : 712 - 712
  • [44] Direction of TDCS current flow in human sensorimotor cortex influences behavioural learning
    Hannah, Ricci
    Iacovou, Anna
    Rothwell, John C.
    BRAIN STIMULATION, 2019, 12 (03) : 684 - 692
  • [45] INTERHEMISPHERIC INFLUENCES ON SENSORIMOTOR NEURONS
    TYNER, CF
    TOWE, AL
    EXPERIMENTAL NEUROLOGY, 1970, 28 (01) : 88 - &
  • [46] Reinforcement Learning Using a Stochastic Gradient Method with Memory-Based Learning
    Yamada, Takafumi
    Yamaguchi, Satoshi
    ELECTRICAL ENGINEERING IN JAPAN, 2010, 173 (01) : 32 - 40
  • [47] Molecule generation using transformers and policy gradient reinforcement learning
    Mazuz, Eyal
    Shtar, Guy
    Shapira, Bracha
    Rokach, Lior
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [48] Reinforcement learning for continuous action using stochastic gradient ascent
    Kimura, H
    Kobayashi, S
    INTELLIGENT AUTONOMOUS SYSTEMS: IAS-5, 1998, : 288 - 295
  • [49] Meta-Gradient Reinforcement Learning with an Objective Discovered Online
    Xu, Zhongwen
    van Hasselt, Hado
    Hessel, Matteo
    Oh, Junhyuk
    Singh, Satinder
    Silver, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [50] Derivatives of Logarithmic Stationary Distributions for Policy Gradient Reinforcement Learning
    Morimura, Tetsuro
    Uchibe, Eiji
    Yoshimoto, Junichiro
    Peters, Jan
    Doya, Kenji
    NEURAL COMPUTATION, 2010, 22 (02) : 342 - 376