Speciesist bias in AI: a reply to Arandjelović

Cited by: 0
Authors
Thilo Hagendorff
Leonie Bossert
Tse Yip Fai
Peter Singer
Institutions
[1] University of Stuttgart, SRF IRIS
[2] University of Tuebingen, International Center for Ethics in the Sciences and Humanities
[3] Princeton University, Center for Information Technology Policy
[4] Princeton University, University Center for Human Values
Source
AI and Ethics | 2023, Vol. 3, Issue 4
Keywords
Speciesist bias; Fairness; Artificial intelligence; Machine learning; AI ethics; Speciesism;
DOI
10.1007/s43681-023-00319-z
Abstract
The elimination of biases in artificial intelligence (AI) applications—for example biases based on race or gender—is a high priority in AI ethics. So far, however, efforts to eliminate bias have all been anthropocentric. Biases against nonhuman animals have not been considered, despite the influence AI systems can have on normalizing, increasing, or reducing the violence that is inflicted on animals, especially on farmed animals. Hence, in 2022, we published a paper in AI and Ethics in which we empirically investigated various examples of image recognition, word embedding, and language models, with the aim of testing whether they perpetuate speciesist biases. A critical response has appeared in AI and Ethics, accusing us of drawing upon theological arguments, having a naive anti-speciesist mindset, and making mistakes in our empirical analyses. We show that these claims are misleading.
Pages: 1043-1047
Page count: 4
Related papers
50 records in total
  • [1] Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals
    Thilo Hagendorff
    Leonie N. Bossert
    Yip Fai Tse
    Peter Singer
    [J]. AI and Ethics, 2023, 3 (3): 717-734
  • [2] Apropos of “Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals”
    Ognjen Arandjelović
    [J]. AI and Ethics, 2023, 3 (3): 1021-1023
  • [3] Speciesist language and nonhuman animal bias in English Masked Language Models
    Takeshita, Masashi
    Rzepka, Rafal
    Araki, Kenji
    [J]. Information Processing & Management, 2022, 59 (05)
  • [4] AI, Bias, and Discrimination
    El-Samad, Hana
    [J]. GEN Biotechnology, 2023, 2 (06): 445
  • [5] Biometrics and AI Bias
    Michael, Katina
    Abbas, Roba
    Jayashree, Payyazhi
    Bandara, Ruwan J.
    Aloudat, Anas
    [J]. IEEE Transactions on Technology and Society, 2022, 3 (01): 2-8
  • [6] Managing Bias in AI
    Roselli, Drew
    Matthews, Jeanna
    Talagala, Nisha
    [J]. Companion of The World Wide Web Conference (WWW 2019), 2019: 539-544
  • [7] Engineering Bias in AI
    Weber, Cynthia
    [J]. IEEE Pulse, 2019, 10 (01): 15-17
  • [8] Bias in, bias out - reply
    Himmelstein, D. U.
    Woolhandler, S.
    [J]. Health Affairs, 1992, 11 (02): 235-238
  • [9] AI pitfalls and what not to do: mitigating bias in AI
    Gichoya, Judy Wawira
    Thomas, Kaesha
    Celi, Leo Anthony
    Safdar, Nabile
    Banerjee, Imon
    Banja, John D.
    Seyyed-Kalantari, Laleh
    Trivedi, Hari
    Purkayastha, Saptarshi
    [J]. British Journal of Radiology, 2023, 96 (1150)
  • [10] Automation Bias in Breast AI
    Baltzer, Pascal A. T.
    [J]. Radiology, 2023, 307 (04)