Algorithmic gender bias: investigating perceptions of discrimination in automated decision-making

Cited: 0
Authors
Kim, Soojong [1,2,5]
Oh, Poong [3 ]
Lee, Joomi [4 ]
Affiliations
[1] Univ Calif Davis, Dept Commun, Davis, CA USA
[2] Stanford Univ, Stanford Ctr Philanthropy & Civil Soc, Stanford, CA USA
[3] Nanyang Technol Univ, Wee Kim Wee Sch Commun & Informat, Singapore, Singapore
[4] Univ Georgia, Dept Advertising & Publ Relat, Athens, GA USA
[5] Univ Calif Davis, Dept Commun, 361 Kerr Hall, Davis, CA 95616 USA
Keywords
Automated decision-making; artificial intelligence; gender; identity; bias; ARTIFICIAL-INTELLIGENCE; UNITED-STATES; SELF; ATTRIBUTIONS; FAIRNESS; EQUALITY; IMPACT; TRUST; MODEL
DOI
10.1080/0144929X.2024.2306484
CLC Classification
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
With the widespread use of artificial intelligence and automated decision-making (ADM), concerns are growing about automated decisions that are biased against certain social groups, such as women and racial minorities. The public's skepticism and the danger of algorithmic discrimination are widely acknowledged, yet the role of key factors constituting the context of discriminatory situations remains underexplored. This study examined people's perceptions of gender bias in ADM, focusing on three factors that shape responses to discriminatory automated decisions: the target of discrimination (subject vs. other), the gender identity of the subject, and the situational contexts that engender biases. Based on a randomised experiment (N = 602), we found stronger negative reactions to automated decisions that discriminate against the subject's own gender group than to those discriminating against other gender groups, as evidenced by lower perceived fairness and trust in ADM, stronger negative emotion, and a greater tendency to question the outcome. These negative reactions were more pronounced among participants from underserved gender groups than among men. Participants were also more sensitive to biases in economic and occupational contexts than in other situations. These findings suggest that perceptions of algorithmic biases should be understood in relation to the public's lived experience of inequality and injustice in society.
Pages: 14
Related Papers
50 records in total
  • [31] Clinical Decision-Making, Gender Bias, Virtue Epistemology, and Quality Healthcare
    James A. Marcum
    Topoi, 2017, 36 : 501 - 508
  • [32] How gender and emotions bias the credit decision-making in banking firms
    Bacha, Sami
    Azouzi, Mohamed Ali
    JOURNAL OF BEHAVIORAL AND EXPERIMENTAL FINANCE, 2019, 22 : 183 - 191
  • [33] Clinical Decision-Making, Gender Bias, Virtue Epistemology, and Quality Healthcare
    Marcum, James A.
TOPOI-AN INTERNATIONAL REVIEW OF PHILOSOPHY, 2017, 36 (03): 501 - 508
  • [34] Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability
    Yurrita, Mireia
    Draws, Tim
    Balayn, Agathe
    Murray-Rust, Dave
    Tintarev, Nava
    Bozzon, Alessandro
    PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2023), 2023,
  • [35] Bias and discrimination in ML-based systems of administrative decision-making and support
    MAC, Trang Anh
    Computer Law and Security Review, 2024, 55
  • [36] The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making
    Schaap, Gabi
    Bosse, Tibor
    Hendriks Vettehen, Paul
    AI & SOCIETY, 2024, 39 (04) : 1947 - 1960
  • [37] Reviewable Automated Decision-Making
    Cobbe, Jennifer
    Singh, Jatinder
    COMPUTER LAW & SECURITY REVIEW, 2020, 39
  • [38] DECISION-MAKING IN AN AUTOMATED AGE
    EHRLE, RA
    PERSONNEL JOURNAL, 1963, 42 (10) : 492 - 494
  • [39] Principal Fairness for Human and Algorithmic Decision-Making
    Imai, Kosuke
    Jiang, Zhichao
    STATISTICAL SCIENCE, 2023, 38 (02) : 317 - 328
  • [40] Algorithmic Driven Decision-Making Systems in Education
    Ferrero, Federico
    Gewerc, Adriana
    2019 XIV LATIN AMERICAN CONFERENCE ON LEARNING TECHNOLOGIES (LACLO 2019), 2020, : 166 - 173