Truthful meta-explanations for local interpretability of machine learning models

Cited by: 0
Authors
Ioannis Mollas
Nick Bassiliades
Grigorios Tsoumakas
Affiliation
[1] Aristotle University of Thessaloniki, School of Informatics
Source
Applied Intelligence | 2023, Vol. 53
Keywords
Explainable artificial intelligence; Interpretable machine learning; Local interpretation; Meta-explanations; Evaluation; Argumentation
DOI
Not available
Abstract
The integration of automated machine-learning-based systems into a wide range of tasks has expanded as a result of their performance and speed. Although employing ML-based systems offers numerous advantages, such systems should not be used in critical or high-risk applications unless they are interpretable. To address this issue, researchers and businesses have been focusing on ways to improve the explainability of complex ML systems, and several such methods have been developed. Indeed, so many techniques now exist that it is difficult for practitioners to choose the best one for their application, even with the help of evaluation metrics. As a result, there is a clear demand for a selection tool: a meta-explanation technique based on a high-quality evaluation metric. In this paper, we present a local meta-explanation technique built on top of the truthfulness metric, a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the relevant concepts and through experimentation.
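The truthfulness metric named in the abstract evaluates how faithfully a local explanation reflects the model's actual behaviour. As a rough, hypothetical illustration of this kind of faithfulness check (not the authors' exact definition), the sketch below scores an importance vector by whether each weight's sign agrees with the direction the prediction moves when the corresponding feature is nudged upward; the function name `truthfulness_score`, the perturbation scheme, and the sign-agreement rule are all assumptions made for this example.

```python
import numpy as np

def truthfulness_score(predict, instance, importances, step=0.1):
    """Fraction of features whose importance sign agrees with the
    direction the prediction moves when that feature is increased.

    predict:     callable mapping a (1, n_features) array to a 1-element array
    instance:    1-D array, the example being explained
    importances: 1-D array of local feature-importance weights
    """
    base = predict(instance.reshape(1, -1))[0]
    agreements = 0
    for i, weight in enumerate(importances):
        perturbed = instance.copy()
        # nudge feature i upward by a small relative step
        perturbed[i] += step * (abs(perturbed[i]) if perturbed[i] != 0 else 1.0)
        delta = predict(perturbed.reshape(1, -1))[0] - base
        # a positive weight should push the prediction up, a negative one down
        if np.sign(weight) == np.sign(delta):
            agreements += 1
    return agreements / len(importances)
```

For a linear model, the true coefficients are maximally faithful under this check, while an explanation with a flipped coefficient sign scores lower.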
Pages: 26927-26948
Page count: 21
Related papers
50 items in total
  • [41] On the Interpretability of Machine Learning Models and Experimental Feature Selection in Case of Multicollinear Data
    Drobnic, Franc
    Kos, Andrej
    Pustisek, Matevz
    ELECTRONICS, 2020, 9 (05)
  • [42] Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values
    Wang, Zijie J.
    Kale, Alex
    Nori, Harsha
    Stella, Peter
    Nunnally, Mark E.
    Chau, Duen Horng
    Vorvoreanu, Mihaela
    Vaughan, Jennifer Wortman
    Caruana, Rich
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022: 4132-4142
  • [43] Techniques to Improve Ecological Interpretability of Black-Box Machine Learning Models
    Welchowski, Thomas
    Maloney, Kelly O.
    Mitchell, Richard
    Schmid, Matthias
    JOURNAL OF AGRICULTURAL BIOLOGICAL AND ENVIRONMENTAL STATISTICS, 2022, 27 (01): 175-197
  • [44] A Review of Framework for Machine Learning Interpretability
    Araujo, Ivo de Abreu
    Torres, Renato Hidaka
    Sampaio Neto, Nelson Cruz
    AUGMENTED COGNITION, AC 2022, 2022, 13310: 261-272
  • [45] A Study on Interpretability of Decision of Machine Learning
    Shirataki, Shohei
    Yamaguchi, Saneyasu
    2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2017: 4830-4831
  • [46] Evaluating molecular representations in machine learning models for drug response prediction and interpretability
    Baptista, Delora
    Correia, Joao
    Pereira, Bruno
    Rocha, Miguel
    JOURNAL OF INTEGRATIVE BIOINFORMATICS, 2022, 19 (03)
  • [47] Comparing Strategies for Post-Hoc Explanations in Machine Learning Models
    Vij, Aabhas
    Nanjundan, Preethi
    MOBILE COMPUTING AND SUSTAINABLE INFORMATICS, 2022, 68: 585-592
  • [48] A framework for falsifiable explanations of machine learning models with an application in computational pathology
    Schuhmacher, David
    Schorner, Stephanie
    Kupper, Claus
    Grosserueschkamp, Frederik
    Sternemann, Carlo
    Lugnier, Celine
    Kraeft, Anna-Lena
    Jutte, Hendrik
    Tannapfel, Andrea
    Reinacher-Schick, Anke
    Gerwert, Klaus
    Mosig, Axel
    MEDICAL IMAGE ANALYSIS, 2022, 82
  • [49] Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models
    Mishra, Swati
    Rzeszotarski, Jeffrey M.
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2021, 5 (CSCW1)
  • [50] Quantile-constrained Wasserstein projections for robust interpretability of numerical and machine learning models
    Il Idrissi, Marouane
    Bousquet, Nicolas
    Gamboa, Fabrice
    Iooss, Bertrand
    Loubes, Jean-Michel
    ELECTRONIC JOURNAL OF STATISTICS, 2024, 18 (02): 2721-2770