Human bias in AI models? Anchoring effects and mitigation strategies in large language models

Cited by: 0
Author
Nguyen, Jeremy K. [1]
Affiliation
[1] Swinburne Univ Technol, Dept Accounting Econ & Finance, Hawthorn, Vic 3122, Australia
Keywords
Anchoring bias; Artificial intelligence; Heuristics
DOI
10.1016/j.jbef.2024.100971
Chinese Library Classification (CLC)
F8 [Public Finance; Finance]
Discipline Classification Code
0202
Abstract
This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro, and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal anchoring bias across all four models: forecasts are significantly influenced by the prior mention of high or low values. We examine two mitigation prompting strategies, 'Chain of Thought' and 'ignore previous', and find limited and varying degrees of effectiveness. Our results extend anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting, both in ad hoc LLM use and in crafting few-shot examples.
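To make the experimental design concrete, the following minimal Python sketch illustrates an anchoring trial of the kind the abstract describes: each query is randomly assigned a high- or low-anchor prompt, a model is queried, and mean forecasts are compared across arms. The anchor values, the prompt wording, and the query_llm stand-in (which simulates an anchored responder so the script runs end to end) are illustrative assumptions, not the paper's actual protocol; the optional 'cot' and 'ignore' arguments mirror the two mitigation prompts named in the abstract.

import random
import statistics

HIGH_ANCHOR = 12000  # hypothetical high anchor value for an index forecast
LOW_ANCHOR = 3000    # hypothetical low anchor value

def build_prompt(anchor: int, mitigation: str | None = None) -> str:
    """Compose a forecasting prompt that first mentions an anchor value.

    `mitigation` selects an optional debiasing instruction: 'cot' appends a
    Chain-of-Thought cue; 'ignore' asks the model to disregard the prior value.
    """
    prompt = (
        f"An analyst recently mentioned the value {anchor}. "
        "What is your forecast for the index level in 12 months? "
        "Answer with a single number."
    )
    if mitigation == "cot":
        prompt += " Think step by step before answering."
    elif mitigation == "ignore":
        prompt += " Ignore the previously mentioned value."
    return prompt

def query_llm(prompt: str) -> float:
    """Placeholder for a real model call (e.g. GPT-4 or Claude 2).

    Simulates an anchored responder so the trial runs end to end:
    replies drift toward whichever anchor appears in the prompt.
    """
    anchor = HIGH_ANCHOR if str(HIGH_ANCHOR) in prompt else LOW_ANCHOR
    return random.gauss(0.5 * (anchor + 7500), 500)

def run_trial(n: int = 50) -> None:
    """Randomised controlled trial: assign each query to an anchor arm."""
    arms = {"high": HIGH_ANCHOR, "low": LOW_ANCHOR}
    forecasts: dict[str, list[float]] = {name: [] for name in arms}
    for _ in range(n):
        name = random.choice(list(arms))
        forecasts[name].append(query_llm(build_prompt(arms[name])))
    # A gap between arm means is evidence of anchoring.
    for name, values in forecasts.items():
        print(f"{name}-anchor mean forecast: {statistics.mean(values):.0f}")

if __name__ == "__main__":
    run_trial()

Passing mitigation="cot" or mitigation="ignore" to build_prompt within run_trial would reproduce the mitigation arms; the abstract reports that such prompts reduce the anchoring gap only partially and inconsistently.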
Pages: 8