Parallel model-based and model-free reinforcement learning for card sorting performance

Cited by: 0
Authors
Alexander Steinke
Florian Lange
Bruno Kopp
Institutions
[1] Hannover Medical School,Department of Neurology
[2] KU Leuven,Behavioral Engineering Research Group
Abstract
The Wisconsin Card Sorting Test (WCST) is considered a gold standard for the assessment of cognitive flexibility. On the WCST, repeating a sorting category following negative feedback is typically treated as indicating reduced cognitive flexibility; such responses are therefore referred to as ‘perseveration’ errors. Recent research suggests that the propensity for perseveration errors is modulated by response demands: they occur less frequently when committing them repeats the previously executed response. Here, we propose parallel reinforcement-learning models of card sorting performance, which assume that card sorting performance results from model-free reinforcement learning at the level of responses, occurring in parallel with model-based reinforcement learning at the level of sorting categories. We compared parallel reinforcement-learning models with purely model-based reinforcement learning and with the state-of-the-art attentional-updating model, analyzing data from 375 participants who completed a computerized WCST. Parallel reinforcement-learning models showed the best predictive accuracy for the majority of participants, and only parallel reinforcement-learning models accounted for the modulation of perseveration propensity by response demands. In conclusion, parallel reinforcement-learning models provide a new theoretical perspective on card sorting and offer a suitable framework for discerning individual differences in the latent processes that subserve behavioral flexibility.
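The core idea of the abstract — value learning proceeding simultaneously at the response level (model-free) and the category level (model-based), with both contributing to choice — can be illustrated with a minimal sketch. This is not the authors' fitted model; all names, learning rates, and the softmax mixing scheme below are simplifying assumptions for illustration only.

```python
import numpy as np

# Hypothetical, simplified sketch of parallel reinforcement learning:
# values are learned at two levels at once.
#   - category level (model-based): values over sorting rules
#     (color, shape, number)
#   - response level (model-free): values over concrete key presses

n_categories = 3   # color, shape, number
n_responses = 4    # four key cards / response keys

q_cat = np.ones(n_categories) / n_categories   # category values
q_resp = np.zeros(n_responses)                 # response values

alpha_cat, alpha_resp = 0.3, 0.1   # assumed learning rates
beta, w = 3.0, 0.5                 # inverse temperature, mixing weight

def choice_probs(cat_to_resp):
    """Combine both value systems into response probabilities.

    `cat_to_resp[c]` gives the response that category c implies on the
    current trial (this mapping depends on the stimulus card).
    """
    # project category values onto the responses they imply
    v = np.zeros(n_responses)
    for c, r in enumerate(cat_to_resp):
        v[r] += q_cat[c]
    net = w * v + (1 - w) * q_resp        # parallel combination
    e = np.exp(beta * (net - net.max()))  # numerically stable softmax
    return e / e.sum()

def update(chosen_cat, chosen_resp, reward):
    """Feedback drives delta-rule updates at both levels in parallel."""
    q_cat[chosen_cat] += alpha_cat * (reward - q_cat[chosen_cat])
    q_resp[chosen_resp] += alpha_resp * (reward - q_resp[chosen_resp])
```

Because response values are updated independently of category values, a perseverative category choice becomes more likely when it also repeats a recently rewarded response — the response-demand modulation the abstract describes.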
Related papers
50 items
  • [41] Comparison of Model-Based and Model-Free Reinforcement Learning for Real-World Dexterous Robotic Manipulation Tasks
    Valencia, David
    Jia, John
    Li, Raymond
    Hayashi, Alex
    Lecchi, Megan
    Terezakis, Reuel
    Gee, Trevor
    Liarokapis, Minas
    MacDonald, Bruce A.
    Williams, Henry
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 871 - 878
  • [42] Model-free reinforcement learning with model-based safe exploration: Optimizing adaptive recovery process of infrastructure systems
    Memarzadeh, Milad
    Pozzi, Matteo
    [J]. STRUCTURAL SAFETY, 2019, 80 : 46 - 55
  • [43] Model-Free Versus Model-Based Methods
    [No author listed]
    [J]. IEEE CONTROL SYSTEMS MAGAZINE, 2023, 43 (05): : 40 - 40
  • [44] Model-free, Model-based, and General Intelligence
    Geffner, Hector
    [J]. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 10 - 17
  • [45] Model-Free Control for Soft Manipulators based on Reinforcement Learning
    You, Xuanke
    Zhang, Yixiao
    Chen, Xiaotong
    Liu, Xinghua
    Wang, Zhanchi
    Jiang, Hao
    Chen, Xiaoping
    [J]. 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 2909 - 2915
  • [46] Model-Free Emergency Frequency Control Based on Reinforcement Learning
    Chen, Chunyu
    Cui, Mingjian
    Li, Fangxing
    Yin, Shengfei
    Wang, Xinan
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (04) : 2336 - 2346
  • [47] Model-free Control for Stratospheric Airship Based on Reinforcement Learning
    Nie, Chunyu
    Zhu, Ming
    Zheng, Zewei
    Wu, Zhe
    [J]. PROCEEDINGS OF THE 35TH CHINESE CONTROL CONFERENCE 2016, 2016, : 10702 - 10707
  • [48] Free Will Belief as a Consequence of Model-Based Reinforcement Learning
    Rehn, Erik M.
    [J]. ARTIFICIAL GENERAL INTELLIGENCE, AGI 2022, 2023, 13539 : 353 - 363
  • [49] Model-Free Trajectory Optimization for Reinforcement Learning
    Akrour, Riad
    Abdolmaleki, Abbas
    Abdulsamad, Hany
    Neumann, Gerhard
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [50] Model-Free Quantum Control with Reinforcement Learning
    Sivak, V. V.
    Eickbusch, A.
    Liu, H.
    Royer, B.
    Tsioutsios, I.
    Devoret, M. H.
    [J]. PHYSICAL REVIEW X, 2022, 12 (01)