Discrimination, Bias, Fairness, and Trustworthy AI

Times Cited: 16
Authors
Varona, Daniel [1 ]
Suarez, Juan Luis [1 ]
Affiliations
[1] CulturePlex Lab, London, ON N6A 3K6, Canada
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Iss. 12
Keywords
discrimination; bias; fairness; trustworthy ADMS; principled AI; social impact of AI; ethics and AI;
DOI
10.3390/app12125826
Abstract
Featured Application: To understand the multiple definitions available for the variables "Discrimination", "Bias", "Fairness", and "Trustworthy AI" in the context of the social impact of algorithmic decision-making systems (ADMS), with the aim of reaching a consensus on them as working variables for that context.

In this study, we analyze "Discrimination", "Bias", "Fairness", and "Trustworthiness" as working variables in the context of the social impact of AI. We identified a set of specialized variables, such as security, privacy, and responsibility, that are used to operationalize the principles in the Principled AI International Framework. These variables are defined in such a way that they contribute to others of more general scope, such as the ones studied here, in what appears to be a generalization-specialization relationship. Our aim is to understand how the available notions of bias, discrimination, fairness, and the other related variables that must be assured during a software project's lifecycle (security, privacy, responsibility, etc.) can be used when developing trustworthy algorithmic decision-making systems (ADMS). Because the Principled AI International Framework approaches bias, discrimination, and fairness mainly with an operational interest, we included sources from outside the framework to complement, from a conceptual standpoint, their study and their relationship with each other.
Pages: 13