Adaptation and learning as strategies to maximize reward in neurofeedback tasks

Cited: 0
Authors
Osuna-Orozco, Rodrigo [1 ]
Zhao, Yi [1 ]
Stealey, Hannah Marie [1 ]
Lu, Hung-Yun [1 ]
Contreras-Hernandez, Enrique [1 ]
Santacruz, Samantha Rose [1 ,2 ,3 ]
Institutions
[1] Univ Texas Austin, Dept Biomed Engn, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[3] Univ Texas Austin, Inst Neurosci, Austin, TX 78712 USA
Source
Funding
National Science Foundation (USA);
Keywords
brain-computer interface; neural manifold; reinforcement learning; neurofeedback; adaptation; dimensionality reduction;
DOI
10.3389/fnhum.2024.1368115
CLC Number
Q189 [Neuroscience];
Discipline Classification Code
071006;
Abstract
Introduction: Adaptation and learning both contribute to the acquisition of new motor skills and serve as strategies for coping with changing environments. However, it is difficult to determine the relative contribution of each during goal-directed motor tasks. This study explores the dynamics of neural activity during a center-out reaching task with continuous visual feedback under the influence of rotational perturbations.

Methods: Results from a brain-computer interface (BCI) task performed by two non-human primate (NHP) subjects are compared with simulations from a reinforcement learning agent performing an analogous task. We characterized baseline activity and compared it to activity after rotational perturbations of different magnitudes were introduced. We employed principal component analysis (PCA) to analyze both the spiking activity driving the cursor in the NHP BCI task and the activations of the reinforcement learning agent's neural network.

Results and discussion: Our analyses reveal that, for both the NHPs and the reinforcement learning agent, the task-relevant neural manifold is isomorphic with the task. For the NHPs, however, the manifold is largely preserved across all rotational perturbations explored, and adaptation of neural activity occurs within this manifold: rotations are compensated by reassignment of regions of the neural space in an angular pattern that cancels them. In contrast, retraining the reinforcement learning agent to reach the targets after rotation substantially modifies the underlying neural manifold. Our findings demonstrate that NHPs adapt their existing repertoire of neural dynamics in a quantitatively precise manner to account for perturbations of different magnitudes, and that they do so in a way that obviates the need for extensive learning.
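The abstract's core measurement, testing whether a low-dimensional PCA manifold is preserved after a rotational perturbation, can be illustrated with a minimal sketch. The code below uses synthetic data, not the paper's recordings: unit counts, noise levels, and the 50-degree rotation are illustrative assumptions, and `pca_subspace`/`principal_angles` are hypothetical helper names. Small principal angles between the baseline and post-perturbation subspaces indicate within-manifold adaptation of the kind reported for the NHP subjects.

```python
# Hedged sketch: comparing PCA manifolds before and after a rotational
# perturbation. All data are synthetic and all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pca_subspace(X, k):
    """Top-k principal directions (columns) of data X (samples x units)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data matrix: rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T  # shape: units x k, orthonormal columns

def principal_angles(U, V):
    """Principal angles (radians) between two orthonormal subspaces."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Synthetic population activity: 30 units whose activity lies near a
# 2-D task manifold (e.g., cursor x/y latents), plus isotropic noise.
n_units, n_samples = 30, 2000
mixing = rng.normal(size=(2, n_units))        # latent-to-unit map
latents = rng.normal(size=(n_samples, 2))     # 2-D task-relevant latents
baseline = latents @ mixing + 0.1 * rng.normal(size=(n_samples, n_units))

# Within-manifold adaptation: the latents rotate, the unit map does not.
theta = np.deg2rad(50.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
adapted = (latents @ R) @ mixing + 0.1 * rng.normal(size=(n_samples, n_units))

U = pca_subspace(baseline, 2)
V = pca_subspace(adapted, 2)
angles = np.rad2deg(principal_angles(U, V))
print("principal angles (deg):", np.round(angles, 2))
# Near-zero angles: the 2-D manifold is preserved and the activity has
# merely rotated within it, rather than moving to a new subspace.
```

A manifold-changing solution, such as the retrained reinforcement learning agent described above, would instead produce large principal angles between the two subspaces.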
Pages: 10