Towards a pragmatist dealing with algorithmic bias in medical machine learning

Cited by: 34
Authors
Starke, Georg [1]
De Clercq, Eva [1]
Elger, Bernice S. [1,2]
Institutions
[1] Univ Basel, Inst Biomed Eth, Basel, Switzerland
[2] Univ Geneva, Ctr Legal Med, Geneva, Switzerland
Keywords
Artificial intelligence; Machine learning; Pragmatism; Philosophy of science; Algorithmic bias; Fairness; Cross-cultural validation; Reported outcome measure; Diagnosis; Lupus
DOI
10.1007/s11019-021-10008-5
Chinese Library Classification
B82 [Ethics (moral philosophy)]
Abstract
Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge concerns discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous treatment. In the curation of training data, however, this strategy runs into severe problems, since distinguishing between the two can be next to impossible. We thus plead for a pragmatist dealing with algorithmic bias in healthcare environments. Drawing on a recent reformulation of William James's pragmatist understanding of truth, we recommend that, instead of aiming at a supposedly objective truth, outcome-based therapeutic usefulness should serve as the guiding principle for assessing ML applications in medicine.
Pages: 341-349
Number of pages: 9
Related papers
50 in total
  • [1] Towards a pragmatist dealing with algorithmic bias in medical machine learning
    Georg Starke
    Eva De Clercq
    Bernice S. Elger
    Medicine, Health Care and Philosophy, 2021, 24 : 341 - 349
  • [2] Towards a holistic view of bias in machine learning: bridging algorithmic fairness and imbalanced learning
    Damien Dablain
    Bartosz Krawczyk
    Nitesh Chawla
    Discover Data, 2 (1)
  • [3] Algorithmic Factors Influencing Bias in Machine Learning
    Blanzeisky, William
    Cunningham, Padraig
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021, PT I, 2021, 1524 : 559 - 574
  • [4] An information theoretic approach to reducing algorithmic bias for machine learning
    Kim, Jin-Young
    Cho, Sung-Bae
    NEUROCOMPUTING, 2022, 500 : 26 - 38
  • [5] Algorithmic bias in machine learning-based marketing models
    Akter, Shahriar
    Dwivedi, Yogesh K.
    Sajib, Shahriar
    Biswas, Kumar
    Bandara, Ruwan J.
    Michael, Katina
    JOURNAL OF BUSINESS RESEARCH, 2022, 144 : 201 - 216
  • [6] CfCV: Towards algorithmic debiasing in machine learning experiment
    Akintande, Olalekan Joseph
    Olubusoye, Olusanya Elisa
    INTELLIGENT SYSTEMS WITH APPLICATIONS, 2024, 22
  • [7] Algorithmic fairness and bias mitigation for clinical machine learning with deep reinforcement learning
    Yang, Jenny
    Soltan, Andrew A. S.
    Eyre, David W.
    Clifton, David A.
    NATURE MACHINE INTELLIGENCE, 2023, 5 (08) : 884 - 894
  • [9] Using Pareto simulated annealing to address algorithmic bias in machine learning
    Blanzeisky, William
    Cunningham, Padraig
    KNOWLEDGE ENGINEERING REVIEW, 2022, 37
  • [10] Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness
    Finocchiaro, Jessie
    Maio, Roland
    Monachou, Faidra
    Patro, Gourab K.
    Raghavan, Manish
    Stoica, Ana-Andreea
    Tsirtsis, Stratis
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 489 - 503