Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences

Cited: 0
Authors
Sophie Bavard
Maël Lebreton
Mehdi Khamassi
Giorgio Coricelli
Stefano Palminteri
Affiliations
[1] Institut National de la Santé et de la Recherche Médicale, Laboratoire de Neurosciences Cognitives Computationnelles
[2] Ecole Normale Supérieure, Département d’Etudes Cognitives
[3] Université de Paris Sciences et Lettres, Institut d’Etudes de la Cognition
[4] University of Amsterdam, CREED lab, Amsterdam School of Economics, Faculty of Business and Economics
[5] University of Amsterdam, Amsterdam Brain and Cognition
[6] University of Geneva, Swiss Centre for Affective Sciences
[7] Centre National de la Recherche Scientifique, Institut des Systèmes Intelligents et Robotiques
[8] Sorbonne Universités, Institut des Sciences de l’Information et de leurs Interactions
[9] University of Southern California, Department of Economics
[10] Università di Trento, Centro Mente e Cervello
Source
NATURE COMMUNICATIONS, 2018, 9
DOI
Not available
Abstract
In economics and perceptual decision-making, contextual effects are well documented: decision weights are adjusted as a function of the distribution of stimuli. Yet, in the reinforcement learning literature, whether and how contextual information pertaining to decision states is integrated into learning algorithms has received comparatively little attention. Here, we investigate reinforcement learning behavior and its computational substrates in a task where we orthogonally manipulate outcome valence and magnitude, resulting in systematic variations in state values. Model comparison indicates that subjects’ behavior is best accounted for by an algorithm which includes both reference-point dependence and range adaptation, two crucial features of state-dependent valuation. In addition, we find that state-dependent outcome valuation progressively emerges, is favored by increasing outcome information, and correlates with explicit understanding of the task structure. Finally, our data clearly show that, while locally adaptive (for instance, in negative-valence and small-magnitude contexts), state-dependent valuation comes at the cost of seemingly irrational choices when options are extrapolated out of their original contexts.
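To make the model family concrete, here is a minimal sketch of a Q-learning agent with the two features the abstract names: reference-point centering (subtracting a learned state value from each outcome) and range adaptation (dividing by a learned outcome range). This is not the authors' code; the task structure, parameter values, and the exact form of the range update are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-option contexts differing only in outcome magnitude.
# The payoffs, learning rates, and update rules below are an illustrative
# reconstruction, not the paper's actual task or fitted model.
contexts = {
    "big":   np.array([1.0, 0.0]),   # payoff of option 0 vs. option 1
    "small": np.array([0.1, 0.0]),
}

alpha_q = 0.3    # learning rate for option values (assumed)
alpha_v = 0.3    # learning rate for reference point and range (assumed)
beta = 10.0      # softmax inverse temperature (assumed)

Q = {c: np.zeros(2) for c in contexts}   # option values
V = {c: 0.0 for c in contexts}           # reference point (state value)
R = {c: 1.0 for c in contexts}           # outcome range

def softmax_choice(q):
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return rng.choice(len(q), p=p)

for _ in range(500):
    for c, payoffs in contexts.items():
        a = softmax_choice(Q[c])
        r = payoffs[a]
        # Reference-point centering (subtract V) and range adaptation
        # (divide by R) turn the absolute outcome into a relative one.
        r_rel = (r - V[c]) / R[c]
        Q[c][a] += alpha_q * (r_rel - Q[c][a])
        V[c] += alpha_v * (r - V[c])                # track the state value
        R[c] += alpha_v * (abs(r - V[c]) - R[c])    # track the outcome spread

# Learned values end up on a similar scale in both contexts despite the
# tenfold payoff difference: adaptive within a context, but a source of
# irrational preferences if options are compared across contexts.
print({c: Q[c].round(2) for c in contexts})
```

Under this sketch, normalizing outcomes per context is what lets small-magnitude options be learned as efficiently as large ones, and also what makes a "small" winner look as valuable as a "big" winner when the two are pitted against each other out of context.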
Related papers (3)
  • [1] Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences
    Bavard, Sophie
    Lebreton, Mael
    Khamassi, Mehdi
    Coricelli, Giorgio
    Palminteri, Stefano
    NATURE COMMUNICATIONS, 2018, 9
  • [2] Recent Opioid Use Impedes Range Adaptation in Reinforcement Learning in Human Addiction
    Gueguen, Maelle C. M.
    Anllo, Hernan
    Bonagura, Darla
    Kong, Julia
    Hafezi, Sahar
    Palminteri, Stefano
    Konova, Anna B.
    BIOLOGICAL PSYCHIATRY, 2024, 95 (10): 974-984
  • [3] Two sides of the same coin: Beneficial and detrimental consequences of range adaptation in human reinforcement learning
    Bavard, Sophie
    Rustichini, Aldo
    Palminteri, Stefano
    SCIENCE ADVANCES, 2021, 7 (14)