Models that learn how humans learn: The case of decision-making and its disorders

Cited: 0
Authors
Dezfouli, Amir [1 ,2 ]
Griffiths, Kristi [3 ]
Ramos, Fabio [4 ]
Dayan, Peter [5 ,6 ]
Balleine, Bernard W. [1 ]
Affiliations
[1] School of Psychology, UNSW, Sydney, Australia
[2] Data61, CSIRO, Australia
[3] Westmead Institute for Medical Research, University of Sydney, Sydney, Australia
[4] University of Sydney, Sydney, Australia
[5] Gatsby Computational Neuroscience Unit, UCL, London, United Kingdom
[6] Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Source
PLoS Computational Biology | 2019, Vol. 15, No. 6
Funding
National Health and Medical Research Council (Australia)
Keywords
Recurrent neural networks; Reinforcement learning
DOI
Not available
Abstract
Popular computational models of decision-making make specific assumptions about learning processes that may cause them to underfit observed behaviours. Here we suggest an alternative method using recurrent neural networks (RNNs) to generate a flexible family of models with sufficient capacity to represent the complex learning and decision-making strategies used by humans. In this approach, an RNN is trained to predict the next action that a subject will take in a decision-making task and, in this way, learns to imitate the processes underlying subjects' choices and their learning abilities. We demonstrate the benefits of this approach using a new dataset drawn from patients with either unipolar (n = 34) or bipolar (n = 33) depression and matched healthy controls (n = 34) making decisions on a two-armed bandit task. The results indicate that this new approach outperforms baseline reinforcement-learning methods both in overall performance and in its capacity to predict subjects' choices. We show that the model can be interpreted using off-policy simulations and thereby provides a novel clustering of subjects' learning processes, something that often eludes traditional approaches to modelling and behavioural analysis. © 2019 Dezfouli et al.
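The core idea described in the abstract, an RNN that consumes a subject's history of (action, reward) pairs and outputs a probability distribution over the next choice, can be sketched in plain Python. This is a minimal, untrained Elman RNN with random weights, shown only to illustrate the model's input/output shape; the class name `NextActionRNN` and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class NextActionRNN:
    """Minimal Elman RNN mapping a history of (action, reward) pairs
    to a probability distribution over the next action. Weights are
    random (untrained); in the paper's approach they would be fitted
    to maximise the likelihood of subjects' observed choices."""

    def __init__(self, n_actions=2, n_hidden=8, seed=0):
        rng = random.Random(seed)
        n_in = n_actions + 1  # one-hot previous action + scalar reward

        def mat(rows, cols):
            return [[rng.gauss(0, 0.1) for _ in range(cols)]
                    for _ in range(rows)]

        self.W_in = mat(n_hidden, n_in)     # input -> hidden
        self.W_h = mat(n_hidden, n_hidden)  # hidden -> hidden (recurrence)
        self.W_out = mat(n_actions, n_hidden)  # hidden -> action logits
        self.n_actions = n_actions
        self.n_hidden = n_hidden

    def predict(self, history):
        """history: iterable of (action_index, reward) pairs for past
        trials; returns P(next action) as a list of probabilities."""
        h = [0.0] * self.n_hidden
        for a, r in history:
            x = [0.0] * (self.n_actions + 1)
            x[a] = 1.0   # one-hot encode the chosen action
            x[-1] = r    # append the reward received
            h = [math.tanh(sum(wi * xi for wi, xi in zip(row_in, x)) +
                           sum(wh * hj for wh, hj in zip(row_h, h)))
                 for row_in, row_h in zip(self.W_in, self.W_h)]
        logits = [sum(w * hi for w, hi in zip(row, h)) for row in self.W_out]
        return softmax(logits)

model = NextActionRNN()
# Three past trials on a two-armed bandit: (chosen arm, reward)
p = model.predict([(0, 1.0), (1, 0.0), (0, 1.0)])
```

With trained weights, `p` would give the model's imitation of the subject's choice probabilities, and off-policy simulations (as in the abstract) amount to feeding the network hypothetical histories rather than observed ones.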
Related papers (50 total)
  • [21] The child is father of the man: how humans learn and why
    Ainley, P
    STUDIES IN HIGHER EDUCATION, 2000, 25 (03) : 361 - 362
  • [22] How do humans learn about the reliability of automation?
    Strickland, Luke
    Farrell, Simon
    Wilson, Micah K.
    Hutchinson, Jack
    Loft, Shayne
    COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS, 2024, 9 (01)
  • [24] Learning to Learn: How to Continuously Teach Humans and Machines
    Singh, Parantak
    Li, You
    Sikarwar, Ankur
    Lei, Weixian
    Gao, Difei
    Talbot, Morgan B.
    Sun, Ying
    Shou, Mike Zheng
    Kreiman, Gabriel
    Zhang, Mengmi
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11674 - 11685
  • [25] MODELS OF DECISION-MAKING
    KAPLAN, MF
    SCHWARTZ, S
    CONTEMPORARY PSYCHOLOGY, 1977, 22 (04): : 342 - 342
  • [26] An algorithmic account for how humans efficiently learn, transfer, and compose hierarchically structured decision policies
    Li, Jing-Jing
    Collins, Anne G. E.
    COGNITION, 2025, 254
  • [27] Computational Models of Effort Based Decision-Making in Mood Disorders
    Treadway, Michael
    NEUROPSYCHOPHARMACOLOGY, 2017, 42 : S57 - S58
  • [28] Decision-making under conditions of uncertainty-what can we learn from palivizumab?
    Burls, Amanda
    Sandercock, Josie
    ACTA PAEDIATRICA, 2011, 100 (10) : 1302 - 1305
  • [29] How Do PDP Models Learn Quasiregularity?
    Kim, Woojae
    Pitt, Mark A.
    Myung, Jay I.
    PSYCHOLOGICAL REVIEW, 2013, 120 (04) : 903 - 916
  • [30] How to use fitness landscape models for the analysis of collective decision-making: a case of theory-transfer and its limitations
    Marks, Peter
    Gerrits, Lasse
    Marx, Johannes
    BIOLOGY & PHILOSOPHY, 2019, 34 (01)