Human bias in AI models? Anchoring effects and mitigation strategies in large language models

Cited by: 0
Authors
Nguyen, Jeremy K. [1]
Institutions
[1] Swinburne University of Technology, Department of Accounting, Economics & Finance, Hawthorn, VIC 3122, Australia
Keywords
Anchoring bias; Artificial intelligence; Heuristics
DOI
10.1016/j.jbef.2024.100971
Chinese Library Classification (CLC) code
F8 [Public Finance; Finance]
Subject classification code
0202
Abstract
This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal the presence of anchoring bias across all models: forecasts are significantly influenced by prior mention of high or low values. We examine two mitigation prompting strategies, 'Chain of Thought' and 'ignore previous', finding limited and varying degrees of effectiveness. Our results extend the anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting in both ad hoc LLM use and in crafting few-shot examples.
Pages: 8
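
For illustration only, here is a minimal Python sketch of the kind of anchoring trial the abstract describes: prompt a model after mentioning a high or low numeric value (or no value), collect forecasts, and compare the means, with the two mitigation phrasings ('Chain of Thought' and 'ignore previous') available as options. The query_model function is a placeholder to be swapped for a real LLM client; the prompt wording, anchor values, and simulated responses are assumptions, not the paper's materials.

# Minimal, illustrative sketch of an anchoring-bias trial (not the paper's code).
import random
import statistics
from typing import Optional

TASK = ("Forecast the company's share price in 12 months, in dollars. "
        "Reply with a number only.")

def anchored_prompt(anchor: Optional[float], mitigation: Optional[str] = None) -> str:
    """Build a prompt with an optional numeric anchor and mitigation phrasing."""
    parts = []
    if anchor is not None:
        parts.append(f"An analyst recently mentioned a figure of ${anchor:.0f}.")
    if mitigation == "ignore_previous":
        parts.append("Ignore any previously mentioned figures.")
    elif mitigation == "chain_of_thought":
        parts.append("Think step by step before giving your final number.")
    parts.append(TASK)
    return " ".join(parts)

def query_model(prompt: str) -> float:
    """Placeholder for a real LLM call; here we only simulate an anchored reply."""
    base = 100.0
    dollar_figures = [float(tok.strip("$.,")) for tok in prompt.split() if tok.startswith("$")]
    pull = 0.4 * (dollar_figures[0] - base) if dollar_figures else 0.0
    return base + pull + random.gauss(0, 5)

def run_trial(anchor: Optional[float], mitigation: Optional[str] = None, n: int = 30) -> list:
    """Collect n forecasts for one experimental condition."""
    return [query_model(anchored_prompt(anchor, mitigation)) for _ in range(n)]

if __name__ == "__main__":
    random.seed(0)
    low, high, control = run_trial(20), run_trial(500), run_trial(None)
    print("mean forecast, low anchor ($20)  :", round(statistics.mean(low), 1))
    print("mean forecast, high anchor ($500):", round(statistics.mean(high), 1))
    print("mean forecast, no anchor         :", round(statistics.mean(control), 1))

In the study itself the comparison is a randomised controlled trial across GPT-4, Claude 2, Gemini Pro and GPT-3.5; the sketch only shows the structure: anchored versus unanchored prompts, with the mitigation phrasing as an additional factor.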