Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms

Cited by: 38
Authors
Giovanola, Benedetta [1 ,3 ]
Tiribelli, Simona [1 ,2 ]
Affiliations
[1] Univ Macerata, Dept Polit Sci Commun & Int Relat, I-62100 Macerata, Italy
[2] PathCheck Fdn, Inst Technol & Global Hlth, 955 Massachusetts Ave, Cambridge, MA 02139 USA
[3] Tufts Univ, Dept Philosophy, 222 Miner Hall, Medford, MA 02155 USA
Keywords
Fairness; Healthcare machine-learning algorithms; Bias; Discrimination; Ethics of algorithms; Respect; Prediction; Diagnosis
DOI
10.1007/s00146-022-01455-6
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made fairness in ML, and more specifically in healthcare ML algorithms (HMLA), an important and urgent concern. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve our goal, following a first section aimed at clarifying the background, methodology and structure of the paper, in the second section, we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section, we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than just non-discrimination. Moreover, we highlight that fairness not only has a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons.
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value in the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases as well as more than just distribution; it needs to ensure that HMLA respects persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society.
Pages: 549-563 (15 pages)
Related papers (9)
  • [1] Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms
    Benedetta Giovanola
    Simona Tiribelli
    [J]. AI & SOCIETY, 2023, 38 : 549 - 563
  • [2] Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms (vol 38, pg 549, 2023)
    Giovanola, Benedetta
    Tiribelli, Simona
    [J]. AI & SOCIETY, 2024, 39 (05) : 2637 - 2637
  • [3] Exploring Bias and Fairness in Artificial Intelligence and Machine Learning Algorithms
    Khakurel, Utsab
    Abdelmoumin, Ghada
    Bajracharya, Aakriti
    Rawat, Danda B.
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS IV, 2022, 12113
  • [4] Bias, Fairness and Accountability with Artificial Intelligence and Machine Learning Algorithms
    Zhou, Nengfeng
    Zhang, Zach
    Nair, Vijayan N.
    Singhal, Harsh
    Chen, Jie
    [J]. INTERNATIONAL STATISTICAL REVIEW, 2022, 90 (03) : 468 - 480
  • [5] Stand types discrimination comparing machine-learning algorithms in Monteverde, Canary Islands
    Garcia-Hidalgo, Miguel
    Blazquez-Casado, Angela
    Agueda, Beatriz
    Rodriguez, Francisco
    [J]. FOREST SYSTEMS, 2018, 27 (03)
  • [6] Limitations of mitigating judicial bias with machine learning: machine-learning algorithms trained with data that encode human bias will reproduce, not eliminate, the bias, says Kristian Lum
    Lum, Kristian
    [J]. NATURE HUMAN BEHAVIOUR, 2017, 1 (07):
  • [7] The promises and perils of machine learning algorithms to reduce bias and discrimination in personnel selection procedures
    Hiemstra, Annemarie M. F.
    Cassel, Tatjana
    Born, Marise Ph.
    Liem, Cynthia C. S.
    [J]. GEDRAG & ORGANISATIE, 2020, 33 (04): : 279 - 299
  • [8] The Dark Side of Machine Learning Algorithms: How and Why They Can Leverage Bias, and What Can Be Done to Pursue Algorithmic Fairness
    Vasileva, Mariya I.
    [J]. KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 3586 - 3587
  • [9] Investigating for bias in healthcare algorithms: a sex-stratified analysis of supervised machine learning models in liver disease prediction
    Straw, Isabel
    Wu, Honghan
    [J]. BMJ HEALTH & CARE INFORMATICS, 2022, 29 (01)