Learning Global Transparent Models Consistent with Local Contrastive Explanations

Cited by: 0
Authors
Pedapati, Tejaswini [1 ]
Balakrishnan, Avinash [1 ]
Shanmugam, Karthikeyan [1 ]
Dhurandhar, Amit [1 ]
Affiliations
[1] IBM Res, Yorktown Hts, NY 10598 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
There is a rich and growing literature on producing local contrastive/counterfactual explanations for black-box models (e.g., neural networks). In these methods, an explanation for an input takes the form of a contrast point that differs from the original input in very few features and lies in a different class. Other works build globally interpretable models, such as decision trees and rule lists, either from the data with its actual labels or from the black-box model's predictions. Although these interpretable global models can be useful, they may not be consistent with the local explanations of a specific black-box model of choice. In this work, we explore the question: can we produce a transparent global model that is simultaneously accurate and consistent with the local (contrastive) explanations of the black-box model? We introduce a natural local consistency metric that quantifies whether the local explanations and predictions of the black-box model are also consistent with the proxy global transparent model. Based on a key insight, we propose a novel method in which we create custom Boolean features from sparse local contrastive explanations of the black-box model and then train a globally transparent model on just these features. We show empirically that such models have higher local consistency than other known strategies, while remaining close in performance to models trained with access to the original data.
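The abstract outlines a three-step pipeline: obtain a sparse contrastive explanation for each input, turn the features those explanations change into custom Boolean threshold features, and train a transparent model on those features alone using the black box's predictions as labels. The following is a minimal, stdlib-only sketch of that idea; the toy `black_box`, the naive one-feature `contrast_point` search, and the lookup table standing in for a decision tree are all illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: the toy black box, the naive contrast search,
# and the lookup-table "transparent model" are assumptions for exposition,
# not the paper's algorithm (which uses real contrastive-explanation
# methods and trains decision trees / rule lists on the Boolean features).

def black_box(x):
    # Toy black box: class 1 iff feature 0 exceeds 0.7.
    return 1 if x[0] > 0.7 else 0

def contrast_point(x, step=0.1, max_iter=50):
    """Crude stand-in for a contrastive-explanation method: nudge one
    feature at a time until the black-box prediction flips, returning
    the changed feature and its new value (a sparse contrast point)."""
    orig = black_box(x)
    for j in range(len(x)):
        for direction in (+1, -1):
            xc = list(x)
            for _ in range(max_iter):
                xc[j] += direction * step
                if black_box(xc) != orig:
                    return j, xc[j]
    return None

def boolean_features(x, rules):
    """Custom Boolean features: one indicator x[j] >= t per mined rule."""
    return tuple(x[j] >= t for j, t in rules)

# Mine (feature, threshold) rules from local explanations of a sample.
sample = [(0.2, 0.3), (0.9, 0.8), (0.5, 0.9), (0.1, 0.1)]
rules = sorted({r for r in (contrast_point(list(x)) for x in sample) if r})

# "Train" a transparent model on the Boolean features alone, with the
# black box's own predictions as labels (a lookup table here, standing
# in for a decision tree or rule list).
table = {boolean_features(x, rules): black_box(x) for x in sample}

def transparent(x):
    return table.get(boolean_features(x, rules), 0)

# Local consistency check: does the proxy agree with the black box?
consistent = all(transparent(x) == black_box(x) for x in sample)
print(consistent)
```

Because the Boolean features are mined from the explanations themselves, the transparent proxy encodes the same decision boundaries the local explanations point at, which is what the paper's local consistency metric is designed to measure.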
Pages: 11