Representer Point Selection via Local Jacobian Expansion for Post-hoc Classifier Explanation of Deep Neural Networks and Ensemble Models

Cited by: 0
Authors
Sui, Yi [1 ]
Wu, Ga [1 ,2 ]
Sanner, Scott [1 ,3 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Borealis AI, Toronto, ON, Canada
[3] Vector Inst Artificial Intelligence, Toronto, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
DOI
N/A
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Explaining the influence of training data on machine learning model predictions is a critical tool for debugging models through data curation. A recent appealing and efficient approach for this task was provided via the concept of Representer Point Selection (RPS), i.e., a method that leverages the dual form of ℓ2-regularized optimization in the last layer of the neural network to identify the contribution of training points to the prediction. However, two key drawbacks of RPS-ℓ2 are that it (i) leads to disagreement between the originally trained network and the RPS-ℓ2 regularized network modification and (ii) often yields a static ranking of training data for test points in the same class, independent of the test point being classified. Inspired by the RPS-ℓ2 approach, we propose an alternative method based on a local Jacobian Taylor expansion (LJE). We empirically compared RPS-LJE with the original RPS-ℓ2 on image classification (with ResNet), text classification (with Bi-LSTM recurrent neural networks), and tabular classification (with XGBoost) tasks. Quantitatively, we show that RPS-LJE slightly outperforms RPS-ℓ2 and other state-of-the-art data explanation methods by up to 3% on a data debugging task. More critically, we qualitatively observe that RPS-LJE provides stable and individualized explanations that are more coherent for each test data point. Overall, RPS-LJE represents a novel alternative to RPS-ℓ2 that provides a powerful tool for sample-based model explanation and debugging.
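The ℓ2 dual form the abstract refers to is the representer theorem of the baseline RPS-ℓ2 method (Yeh et al. [1], not this paper's RPS-LJE variant): at a stationary point of an ℓ2-regularized last layer, the logit for a test point decomposes exactly into per-training-point contributions αᵢ (fᵢ · fₜ), with αᵢ = −(1/2λn) ∂L/∂φᵢ. Below is a minimal NumPy sketch of that decomposition on a toy logistic last layer; the data, the training loop, and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Toy setup (assumed, for illustration): random "penultimate-layer" features
# F and binary labels y, with an l2-regularized logistic last layer.
rng = np.random.default_rng(0)
n, d = 200, 5
F = rng.normal(size=(n, d))                   # training features f_i
y = (F @ rng.normal(size=d) > 0).astype(float)

lam = 0.1                                     # l2 regularization strength
w = np.zeros(d)
for _ in range(20000):                        # full-batch GD to stationarity
    grad_phi = 1.0 / (1.0 + np.exp(-(F @ w))) - y        # dL/dphi_i (logistic)
    w -= 0.1 * (F.T @ grad_phi / n + 2.0 * lam * w)      # regularized gradient

# Representer values: alpha_i = -(1/(2*lam*n)) * dL/dphi_i at the optimum.
grad_phi = 1.0 / (1.0 + np.exp(-(F @ w))) - y
alpha = -grad_phi / (2.0 * lam * n)

# The test logit w.f_t equals the sum of training-point contributions
# alpha_i * (f_i . f_t) -- this is the RPS-l2 decomposition.
f_t = rng.normal(size=d)                      # a test point's features
logit_direct = w @ f_t
logit_representer = alpha @ (F @ f_t)
```

The per-point terms `alpha * (F @ f_t)` are the influence scores that RPS-ℓ2 ranks; the paper's criticism is that the kernel-independent factor αᵢ dominates this ranking, making it nearly static across test points of the same class, which motivates the local Jacobian expansion.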
Pages: 12
Related papers (5 items)
  • [1] Representer Point Selection for Explaining Deep Neural Networks
    Yeh, Chih-Kuan
    Kim, Joon Sik
    Yen, Ian E. H.
    Ravikumar, Pradeep
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks
    Jin, Weina
    Li, Xiaoxiao
    Fatehi, Mostafa
    Hamarneh, Ghassan
    METHODSX, 2023, 10
  • [3] Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis
    Fang, Junfeng
    Liu, Wei
    Gao, Yuan
    Liu, Zemin
    Zhang, An
    Wang, Xiang
    He, Xiangnan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [4] How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks
    Cattelan, Luis Felipe P.
    Silva, Danilo
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2024, 244 : 547 - 584
  • [5] How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin-Systems
    Keane, Mark T.
    Kenny, Eoin M.
    CASE-BASED REASONING RESEARCH AND DEVELOPMENT, ICCBR 2019, 2019, 11680 : 155 - 171