Using the K-associated Optimal Graph to Provide Counterfactual Explanations

Times Cited: 1
Authors
da Silva, Ariel Tadeu [1]
Bertini Junior, Joao Roberto [1]
Affiliations
[1] Univ Estadual Campinas, Sch Technol, Limeira, Brazil
Keywords
DOI
10.1109/FUZZ-IEEE55066.2022.9882751
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Only recently have data mining results been expected to support human interpretability. Explanations help users understand the reasons why (or why not) a model has reached a given decision. Counterfactual explanations, in particular, aim to explain why the model did not yield an expected decision. A counterfactual explanation is usually produced by a post-hoc algorithm, which often requires access to training data or model details. Moreover, most such algorithms do not generate robust explanations, since they assume the model is noise-free and will not be updated over time. This paper proposes a model-agnostic, training-data-independent algorithm to provide robust counterfactual explanations. The proposed method generates data samples around the instance to be explained and builds a K-Associated Optimal Graph (KAOG) with those data. KAOG allows measuring how intertwined the data examples are with respect to their classes. In this way, the explanation method can search for an example that lies in a noise-free area of the attribute space, granting trust to the explanation. Experimental results on counterfactual feasibility and distance from the query data show the effectiveness of the proposed algorithm compared to ten state-of-the-art methods on three data sets.
Pages: 8
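As an illustration of the idea summarized in the abstract, the following Python snippet is a minimal, hypothetical sketch of a KAOG-style counterfactual search: it perturbs the query instance, labels the perturbations with the black-box model, and prefers counterfactual candidates whose neighborhoods are class-pure, used here as a crude proxy for the class-mixture information the K-Associated Optimal Graph captures. The `predict` callable, the perturbation scale `sigma`, and the neighborhood size `k` are assumptions for illustration only; the actual KAOG construction from the paper is not reproduced.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kaog_style_counterfactual(x, predict, n_samples=500, sigma=0.5, k=7, seed=0):
    """Illustrative sketch only: the real method builds a K-Associated Optimal
    Graph over samples drawn around the query; here a k-NN purity score stands
    in for the graph's class-mixture measure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)

    # 1. Generate data samples around the instance to be explained.
    X = x + sigma * rng.standard_normal((n_samples, x.size))
    y = np.asarray(predict(X))                     # black-box labels (model-agnostic)
    y_query = np.asarray(predict(x.reshape(1, -1)))[0]

    # 2. Score how class-pure each sample's neighborhood is
    #    (proxy for how intertwined the classes are in that region).
    idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X, return_distance=False)
    purity = (y[idx] == y[:, None]).mean(axis=1)

    # 3. Among samples classified differently from the query, prefer those in
    #    pure (noise-free) regions, breaking ties by distance to the query.
    cand = np.where(y != y_query)[0]
    if cand.size == 0:
        return None                                # no counterfactual found locally
    dist = np.linalg.norm(X[cand] - x, axis=1)
    best = cand[np.lexsort((dist, -purity[cand]))[0]]
    return X[best]
```

In this sketch, any classifier exposing a `predict` method over feature vectors (e.g., a fitted scikit-learn model) could be passed as `predict`, which is what keeps the search model-agnostic and independent of the original training data.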