Algorithmic gender bias: investigating perceptions of discrimination in automated decision-making

Cited by: 0
Authors
Kim, Soojong [1,2,5]
Oh, Poong [3 ]
Lee, Joomi [4 ]
Affiliations
[1] Univ Calif Davis, Dept Commun, Davis, CA USA
[2] Stanford Univ, Stanford Ctr Philanthropy & Civil Soc, Stanford, CA USA
[3] Nanyang Technol Univ, Wee Kim Wee Sch Commun & Informat, Singapore, Singapore
[4] Univ Georgia, Dept Advertising & Publ Relat, Athens, GA USA
[5] Univ Calif Davis, Dept Commun, 361 Kerr Hall, Davis, CA 95616 USA
Keywords
Automated decision-making; artificial intelligence; gender; identity; bias
Keywords Plus: artificial intelligence; United States; self; attributions; fairness; equality; impact; trust; model
DOI
10.1080/0144929X.2024.2306484
CLC number
TP3 [Computing Technology, Computer Technology]
Subject classification code
0812
Abstract
With the widespread use of artificial intelligence and automated decision-making (ADM), concerns are increasing about automated decisions biased against certain social groups, such as women and racial minorities. The public's skepticism and the danger of algorithmic discrimination are widely acknowledged, yet the role of key factors constituting the context of discriminatory situations is underexplored. This study examined people's perceptions of gender bias in ADM, focusing on three factors that influence responses to discriminatory automated decisions: the target of discrimination (subject vs. other), the gender identity of the subject, and the situational contexts that engender biases. Based on a randomised experiment (N = 602), we found stronger negative reactions to automated decisions that discriminate against the gender group of the subject than to those discriminating against other gender groups, evidenced by lower perceived fairness and trust in ADM, and greater negative emotion and a stronger tendency to question the outcome. The negative reactions were more pronounced among participants in underserved gender groups than among men. Participants were also more sensitive to biases in economic and occupational contexts than in other situations. These findings suggest that perceptions of algorithmic biases should be understood in relation to the public's lived experience of inequality and injustice in society.
Pages: 14
Related papers
50 records (10 shown)
  • [1] Gender discrimination in algorithmic decision-making
    Andreeva, Galina
    Matuszyk, Anna
    2ND INTERNATIONAL CONFERENCE ON ADVANCED RESEARCH METHODS AND ANALYTICS (CARMA 2018), 2018: 251-251
  • [2] Automated decision-making: Hoteliers' perceptions
    Ivanov, Stanislav
    Webster, Craig
    TECHNOLOGY IN SOCIETY, 2024, 76
  • [3] Epistemic Therapy for Bias in Automated Decision-Making
    Gilbert, Thomas Krendl
    Mintz, Yonatan
    AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2019: 61-67
  • [4] How Gender and Type of Algorithmic Group Discrimination Influence Ratings of Algorithmic Decision Making
    Utz, Sonja
    INTERNATIONAL JOURNAL OF COMMUNICATION, 2024, 18: 570-589
  • [5] Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems
    Cobbe, Jennifer
    Lee, Michelle Seng Ah
    Singh, Jatinder
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021: 598-609
  • [6] Automated decision-making and discrimination: General approach and proposals
    Soriano Arnanz, Alba
    REVISTA GENERAL DE DERECHO ADMINISTRATIVO, 2021, (56)
  • [7] Fairness and algorithmic decision-making
    Giovanola, Benedetta
    Tiribelli, Simona
    TEORIA-RIVISTA DI FILOSOFIA, 2022, 42(02): 117-129
  • [8] Algorithmic Decision-Making Framework
    Kissell, Robert
    Malamut, Roberto
    JOURNAL OF TRADING, 2006, 1(01): 12-21
  • [9] Responsible algorithmic decision-making
    Breidbach, Christoph F.
    ORGANIZATIONAL DYNAMICS, 2024, 53(02)
  • [10] Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature
    Starke, Christopher
    Baleis, Janine
    Keller, Birte
    Marcinkowski, Frank
    BIG DATA & SOCIETY, 2022, 9(02)