On the Impact of Explanations on Understanding of Algorithmic Decision-Making

Cited by: 3
Authors
Schmude, Timothee [1]
Koesten, Laura [2]
Moeller, Torsten [3]
Tschiatschek, Sebastian [4]
Affiliations
[1] Univ Vienna, UniVie Doctoral Sch Comp Sci DoCS Vienna, Fac Comp Sci, Res Network Data Sci, Vienna, Austria
[2] Univ Vienna, Res Grp Visualizat & Data Anal Vienna, Fac Comp Sci, Vienna, Austria
[3] Univ Vienna, Res Grp Visualizat & Data Anal Vienna, Fac Comp Sci, Res Network Data Sci, Vienna, Austria
[4] Univ Vienna, Res Grp Data Min & Machine Learning Vienna, Fac Comp Sci, Res Network Data Sci, Vienna, Austria
Keywords
XAI; Learning Sciences; algorithmic decision-making; algorithmic fairness; qualitative methods; PERFORMANCE
DOI
10.1145/3593013.3594054
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Ethical principles for algorithms are gaining importance as more and more stakeholders are affected by "high-risk" algorithmic decision-making (ADM) systems. Understanding how these systems work enables stakeholders to make informed decisions and to assess the systems' adherence to ethical values. Explanations are a promising way to create understanding, but current explainable artificial intelligence (XAI) research does not always consider existing theories on how understanding is formed and evaluated. In this work, we aim to contribute to a better understanding of understanding by conducting a qualitative task-based study with 30 participants, including users and affected stakeholders. We use three explanation modalities (textual, dialogue, and interactive) to explain a "high-risk" ADM system to participants and analyse their responses both inductively and deductively, using the "six facets of understanding" framework by Wiggins & McTighe [63]. Our findings indicate that the "six facets" framework is a promising approach to analyse participants' thought processes in understanding, providing categories for both rational and emotional understanding. We further introduce the "dialogue" modality as a valid explanation approach to increase participant engagement and interaction with the "explainer", allowing for more insight into their understanding in the process. Our analysis further suggests that individuality in understanding affects participants' perceptions of algorithmic fairness, demonstrating the interdependence between understanding and ADM assessment that previous studies have outlined. We posit that drawing from theories on learning and understanding like the "six facets" and leveraging explanation modalities can guide XAI research to better suit explanations to the learning processes of individuals and consequently enable their assessment of ethical values of ADM systems.
Pages: 959-970
Page count: 12