Counterfactual explanations for misclassified images: How human and machine explanations differ

Cited: 3
Authors
Delaney, Eoin [1 ,2 ,3 ]
Pakrashi, Arjun [1 ,3 ]
Greene, Derek [1 ,2 ,3 ]
Keane, Mark T. [1 ,3 ]
Affiliations
[1] Univ Coll Dublin, Sch Comp Sci, Dublin, Ireland
[2] Insight Ctr Data Analyt, Dublin, Ireland
[3] VistaMilk SFI Res Ctr, Dublin, Ireland
Funding
Science Foundation Ireland
Keywords
XAI; Counterfactual explanation; User testing; Black-box
DOI
10.1016/j.artint.2023.103995
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems, because people easily understand them, they apply across different problem domains, and they seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (~7%). Even fewer studies adopt a user-centered perspective; for instance, asking people for their own counterfactual explanations to determine their perspective on a "good explanation". This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images in two user studies and then (ii) compares these human-generated explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not "minimally edit" images when generating counterfactual explanations. Instead, they make larger, "meaningful" edits that better approximate prototypes in the counterfactual class. An analysis based on "explanation goals" is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed. © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Pages: 25
Related Papers
50 records in total
  • [41] Generating Natural Counterfactual Visual Explanations
    Zhao, Wenqi
    Oyama, Satoshi
    Kurihara, Masahito
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020: 5204-5205
  • [42] Metaphysical explanations and the counterfactual theory of explanation
    Stefan Roski
    Philosophical Studies, 2021, 178: 1971-1991
  • [43] Counterfactual Models for Fair and Adequate Explanations
    Asher, Nicholas
    De Lara, Lucas
    Paul, Soumya
    Russell, Chris
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2022, 4 (02): 371-396
  • [44] Counterfactual Explanations for Prediction and Diagnosis in XAI
    Dai, Xinyue
    Keane, Mark T.
    Shalloo, Laurence
    Ruelle, Elodie
    Byrne, Ruth M. J.
    PROCEEDINGS OF THE 2022 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2022, 2022: 215-226
  • [45] Interval abstractions for robust counterfactual explanations
    Jiang, Junqi
    Leofante, Francesco
    Rago, Antonio
    Toni, Francesca
    ARTIFICIAL INTELLIGENCE, 2024, 336
  • [46] Interpretable Counterfactual Explanations Guided by Prototypes
    Van Looveren, Arnaud
    Klaise, Janis
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT II, 2021, 12976: 650-665
  • [47] FACE: Feasible and Actionable Counterfactual Explanations
    Poyiadzi, Rafael
    Sokol, Kacper
    Santos-Rodriguez, Raul
    De Bie, Tijl
    Flach, Peter
    PROCEEDINGS OF THE 3RD AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2020, 2020: 344-350
  • [48] On Counterfactual Explanations under Predictive Multiplicity
    Pawelczyk, Martin
    Broelemann, Klaus
    Kasneci, Gjergji
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124: 809-818
  • [49] Evolutionary explanations in medicine: How do they differ and how can we benefit from them
    Lozano, George A.
    MEDICAL HYPOTHESES, 2010, 74 (04): 746-749
  • [50] How Close Is Too Close? The Role of Feature Attributions in Discovering Counterfactual Explanations
    Wijekoon, Anjana
    Wiratunga, Nirmalie
    Nkisi-Orji, Ikechukwu
    Palihawadana, Chamath
    Corsar, David
    Martin, Kyle
    CASE-BASED REASONING RESEARCH AND DEVELOPMENT, ICCBR 2022, 2022, 13405: 33-47