Detecting implicit biases of large language models with Bayesian hypothesis testing

Cited by: 0
Authors
Shijing Si [1 ]
Xiaoming Jiang [2 ]
Qinliang Su [6 ]
Lawrence Carin [3 ]
Affiliations
[1] Shanghai International Studies University, School of Economics and Finance
[2] Shanghai International Studies University, Institute of Language Sciences
[3] Sun Yat-sen University, School of Computer Science and Engineering
[4] Sun Yat-sen University, Guangdong Key Laboratory of Big Data Analysis and Processing
[5] Duke University, Department of Electrical and Computer Engineering
[6] Shanghai International Studies University, Key Laboratory of Language Sciences and Multilingual Intelligence Applications
Keywords
Large language models; Group bias; Fairness; Bayes factor
DOI
10.1038/s41598-025-95825-x
Abstract
Despite the remarkable performance of large language models (LLMs), such as generative pre-trained Transformers (GPTs), across various tasks, they often perpetuate social biases and stereotypes embedded in their training data. In this paper, we introduce a novel framework that reformulates bias detection in LLMs as a hypothesis testing problem, where the null hypothesis $H_0$ represents the absence of implicit bias. Our framework leverages binary-choice questions to measure social bias in both open-source and proprietary LLMs accessible via APIs. We demonstrate the flexibility of our approach by integrating classical statistical methods, such as the exact binomial test, with Bayesian inference using Bayes factors for bias detection and quantification. Extensive experiments are conducted on prominent models, including ChatGPT (GPT-3.5-Turbo), DeepSeek-V3, and Llama-3.1-70B, using publicly available datasets such as BBQ, CrowS-Pairs (in both English and French), and Winogender. While the exact binomial test fails to distinguish between no evidence of bias and evidence of no bias, our results underscore the advantages of Bayes factors, particularly their capacity to quantify evidence for both competing hypotheses and their robustness to small sample sizes. Additionally, our experiments reveal that the bias behavior of LLMs is largely consistent across the English and French versions of the CrowS-Pairs dataset, with subtle differences likely arising from variations in social norms across linguistic and cultural contexts.
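The testing procedure summarized in the abstract can be sketched in a few lines of Python. The snippet below is an illustrative sketch, not the authors' released code: it assumes the model's answers to n binary-choice questions have been reduced to a count k of stereotype-consistent choices, applies the exact binomial test of H0: p = 0.5, and computes a Bayes factor BF01 for H0: p = 0.5 against an alternative in which p follows a Beta(a, b) prior; the uniform Beta(1, 1) default is an assumption made here for illustration.

# Minimal sketch (assumed implementation, not the authors' released code) of the
# two tests described in the abstract: the exact binomial test and a Bayes factor
# comparing H0: p = 0.5 (no implicit bias) against H1: p ~ Beta(a, b).
from math import exp, log

from scipy.special import betaln, gammaln
from scipy.stats import binomtest


def log_binom_coef(n: int, k: int) -> float:
    """Logarithm of the binomial coefficient C(n, k)."""
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)


def bayes_factor_01(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    """BF_01 = P(k | H0: p = 0.5) / P(k | H1: p ~ Beta(a, b)).

    Values well above 1 support the no-bias hypothesis H0; values well
    below 1 support the biased alternative H1. The Beta(1, 1) default
    prior is an assumption made for this sketch.
    """
    log_marginal_h0 = log_binom_coef(n, k) + n * log(0.5)
    log_marginal_h1 = log_binom_coef(n, k) + betaln(k + a, n - k + b) - betaln(a, b)
    return exp(log_marginal_h0 - log_marginal_h1)


if __name__ == "__main__":
    # Hypothetical counts: out of n binary-choice questions, the LLM chose the
    # stereotype-consistent option k times.
    n, k = 200, 128
    print("exact binomial test p-value:", binomtest(k, n, p=0.5).pvalue)
    print("Bayes factor BF01:", bayes_factor_01(k, n))

Unlike the p-value, the Bayes factor can report evidence in favour of H0 (BF01 substantially greater than 1) as well as against it, which is the distinction between "no evidence of bias" and "evidence of no bias" that the abstract highlights.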