Algorithmic gender bias: investigating perceptions of discrimination in automated decision-making

Cited: 0
|
Authors
Kim, Soojong [1 ,2 ,5 ]
Oh, Poong [3 ]
Lee, Joomi [4 ]
Affiliations
[1] Univ Calif Davis, Dept Commun, Davis, CA USA
[2] Stanford Univ, Stanford Ctr Philanthropy & Civil Soc, Stanford, CA USA
[3] Nanyang Technol Univ, Wee Kim Wee Sch Commun & Informat, Singapore, Singapore
[4] Univ Georgia, Dept Advertising & Publ Relat, Athens, GA USA
[5] Univ Calif Davis, Dept Commun, 361 Kerr Hall, Davis, CA 95616 USA
Keywords
Automated decision-making; artificial intelligence; gender; identity; bias; ARTIFICIAL-INTELLIGENCE; UNITED-STATES; SELF; ATTRIBUTIONS; FAIRNESS; EQUALITY; IMPACT; TRUST; MODEL
DOI
10.1080/0144929X.2024.2306484
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
With the widespread use of artificial intelligence and automated decision-making (ADM), concerns are growing about automated decisions that are biased against certain social groups, such as women and racial minorities. While public skepticism and the danger of algorithmic discrimination are widely acknowledged, the role of key factors constituting the context of discriminatory situations remains underexplored. This study examined people's perceptions of gender bias in ADM, focusing on three factors that influence responses to discriminatory automated decisions: the target of discrimination (subject vs. other), the gender identity of the subject, and the situational contexts that engender biases. Based on a randomised experiment (N = 602), we found stronger negative reactions to automated decisions that discriminate against the subject's own gender group than to those discriminating against other gender groups, evidenced by lower perceived fairness and trust in ADM, greater negative emotion, and a stronger tendency to question the outcome. The negative reactions were more pronounced among participants in underserved gender groups than among men. Participants were also more sensitive to biases in economic and occupational contexts than in other situations. These findings suggest that perceptions of algorithmic biases should be understood in relation to the public's lived experience of inequality and injustice in society.
Pages: 14