Human bias in AI models? Anchoring effects and mitigation strategies in large language models

Cited by: 0
Authors
Nguyen, Jeremy K. [1]
Affiliation
[1] Swinburne Univ Technol, Dept Accounting Econ & Finance, Hawthorn, Vic 3122, Australia
Keywords
Anchoring bias; Artificial intelligence; Heuristics
DOI
10.1016/j.jbef.2024.100971
Chinese Library Classification
F8 [Public Finance, Finance]
Subject Classification Code
0202
Abstract
This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal anchoring bias across all models: forecasts are significantly influenced by the prior mention of high or low values. We examine two mitigation prompting strategies, 'Chain of Thought' and 'ignore previous', and find limited and varying degrees of effectiveness. Our results extend anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting, both in ad hoc LLM use and in crafting few-shot examples.
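The experimental design described in the abstract (anchored vs. control prompts, crossed with two mitigation prompting strategies) can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the base forecasting question, the anchor values, and the exact prompt wordings below are hypothetical placeholders standing in for the paper's materials.

```python
# Sketch of the abstract's design: {high, low, no} anchor crossed with
# {'Chain of Thought', 'ignore previous', no} mitigation strategy.
# All wordings and values here are hypothetical examples.

BASE_QUESTION = "What is your 12-month forecast for the S&P 500 index level?"

def anchored_prompt(question, anchor):
    """Prepend a numeric anchor to the question; None = control (no anchor)."""
    if anchor is None:
        return question
    return f"A commentator recently mentioned the value {anchor:,.0f}. {question}"

def with_mitigation(prompt, strategy):
    """Apply one of the two mitigation strategies examined in the paper."""
    if strategy == "chain_of_thought":
        return prompt + " Think step by step before giving a final number."
    if strategy == "ignore_previous":
        return "Ignore any previously mentioned values. " + prompt
    return prompt  # no mitigation

# Build all 3 x 3 experimental conditions as prompt strings.
conditions = {
    (a_name, s_name): with_mitigation(anchored_prompt(BASE_QUESTION, a), s_name)
    for a_name, a in [("high", 9000.0), ("low", 1000.0), ("none", None)]
    for s_name in ("chain_of_thought", "ignore_previous", None)
}

print(conditions[("high", "ignore_previous")])
```

Each prompt string would then be sent to the model under test many times, with the distribution of numeric forecasts compared across anchor conditions to measure the bias.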
Pages: 8
Related Papers
50 items in total
  • [21] Death by AI: Will large language models diminish Wikipedia?
    Wagner, Christian
    Jiang, Ling
    JOURNAL OF THE ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY, 2025
  • [22] Large language models make AI usable for everyone!
    Bause, Fabian
    Konstruktion, 2024, 76 (04): 3-5
  • [23] Large Language Models and Generative AI, Oh My!
    Cobb, Peter J.
    ADVANCES IN ARCHAEOLOGICAL PRACTICE, 2023, 11 (03): 363-369
  • [25] Language and cultural bias in AI: comparing the performance of large language models developed in different countries on Traditional Chinese Medicine highlights the need for localized models
    Zhu, Lingxuan
    Mou, Weiming
    Lai, Yancheng
    Lin, Junda
    Luo, Peng
    JOURNAL OF TRANSLATIONAL MEDICINE, 2024, 22 (01)
  • [26] Pipelines for Social Bias Testing of Large Language Models
    Nozza, Debora
    Bianchi, Federico
    Hovy, Dirk
    PROCEEDINGS OF WORKSHOP ON CHALLENGES & PERSPECTIVES IN CREATING LARGE LANGUAGE MODELS (BIGSCIENCE EPISODE #5), 2022: 68-74
  • [27] A Causal View of Entity Bias in (Large) Language Models
    Wang, Fei
    Mo, Wenjie
    Wang, Yiwei
    Zhou, Wenxuan
    Chen, Muhao
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023: 15173-15184
  • [28] Cultural bias and cultural alignment of large language models
    Tao, Yan
    Viberg, Olga
    Baker, Ryan S.
    Kizilcec, Rene F.
    PNAS NEXUS, 2024, 3 (09)
  • [29] Locating and Mitigating Gender Bias in Large Language Models
    Cai, Yuchen
    Cao, Ding
    Guo, Rongxi
    Wen, Yaqin
    Liu, Guiquan
    Chen, Enhong
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14878: 471-482
  • [30] Homogenization Effects of Large Language Models on Human Creative Ideation
    Anderson, Barrett R.
    Shah, Jash Hemant
    Kreminski, Max
    PROCEEDINGS OF THE 16TH CONFERENCE ON CREATIVITY AND COGNITION, C&C 2024, 2024: 413-425