On the Trustworthiness of Tree Ensemble Explainability Methods

Cited by: 5
Authors:
Yasodhara, Angeline [1]
Asgarian, Azin [1]
Huang, Diego [1]
Sobhani, Parinaz [1]
Affiliations:
[1] Georgian, 2 St Clair Ave West, Suite 1400, Toronto, ON M4V 1L5, Canada
Keywords:
Explainability; Trustworthiness; Tree ensemble
DOI:
10.1007/978-3-030-84060-0_19
CLC number:
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
The recent increase in the deployment of machine learning models in critical domains such as healthcare, criminal justice, and finance has highlighted the need for trustworthy methods that can explain these models to stakeholders. Feature importance methods (e.g., gain and SHAP) are among the most popular explainability methods used to address this need. For any explainability technique to be trustworthy and meaningful, it must provide explanations that are accurate and stable. Although the stability of local feature importance methods (which explain individual predictions) has been studied before, a knowledge gap remains regarding the stability of global feature importance methods (which explain the model as a whole). Moreover, no study has evaluated and compared the accuracy of global feature importance methods with respect to feature ordering. In this paper, we evaluate the accuracy and stability of global feature importance methods through comprehensive experiments on simulated data as well as four real-world datasets. We focus on tree-based ensemble methods, as they are widely used in industry, and measure the accuracy and stability of explanations under two scenarios: (1) when inputs are perturbed and (2) when models are perturbed. Our findings compare these methods under a variety of settings and shed light on the limitations of global feature importance methods: they lack accuracy both with and without noisy inputs, and they lack stability with respect to (1) increases in input dimension or noise in the data and (2) perturbations of models initialized with different random seeds or hyperparameter settings.
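A minimal sketch (not the authors' code) of the kind of model-perturbation stability check the abstract describes, under assumed tooling: an XGBoost ensemble is trained twice with different random seeds, two global importance vectors (gain and mean |SHAP|) are extracted from each model, and the agreement of the resulting feature orderings is measured with a rank correlation. The dataset, hyperparameters, and the choice of Kendall's tau are illustrative assumptions.

import numpy as np
import shap
import xgboost as xgb
from scipy.stats import kendalltau
from sklearn.datasets import make_regression

# Illustrative synthetic data; the paper uses simulations and four real-world datasets.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)

def global_importances(seed):
    # subsample < 1 makes training stochastic, so the seed actually perturbs the model.
    model = xgb.XGBRegressor(n_estimators=100, subsample=0.8, random_state=seed)
    model.fit(X, y)
    # Gain importance: loss reduction attributed to splits on each feature
    # (features never used in a split are absent from the dict, hence the 0.0 default).
    gain = model.get_booster().get_score(importance_type="gain")
    gain_vec = np.array([gain.get(f"f{i}", 0.0) for i in range(X.shape[1])])
    # Global SHAP importance: mean absolute SHAP value per feature over the data.
    shap_vec = np.abs(shap.TreeExplainer(model).shap_values(X)).mean(axis=0)
    return gain_vec, shap_vec

gain_a, shap_a = global_importances(seed=0)
gain_b, shap_b = global_importances(seed=1)

# Rank correlation of the two orderings: values near 1 indicate an explanation
# that is stable under model perturbation; low values mean the ordering is
# seed-dependent.
tau_gain, _ = kendalltau(gain_a, gain_b)
tau_shap, _ = kendalltau(shap_a, shap_b)
print(f"gain ordering stability: {tau_gain:.3f}")
print(f"SHAP ordering stability: {tau_shap:.3f}")

The paper's other scenario, input perturbation, would follow the same pattern, adding noise to X rather than changing the training seed.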
Pages: 293 - 308
Page count: 16
Related papers (50 in total):
  • [21] Exploring the potential of tree-based ensemble methods in solar radiation modeling
    Hassan, Muhammed A.
    Khalil, A.
    Kaseb, S.
    Kassem, M. A.
    [J]. APPLIED ENERGY, 2017, 203 : 897 - 916
  • [22] Predicting musculoskeletal disorders risk using tree-based ensemble methods
    Paraponaris, A.
    Ba, A.
    Gallic, E.
    Liance, Q.
    Michel, Pierre
    [J]. EUROPEAN JOURNAL OF PUBLIC HEALTH, 2019, 29
  • [23] Comparison of decision tree based ensemble methods for prediction of photovoltaic maximum current
    Omer, Zahi M.
    Shareef, Hussain
    [J]. ENERGY CONVERSION AND MANAGEMENT-X, 2022, 16
  • [24] Feature Ranking for Hierarchical Multi-Label Classification with Tree Ensemble Methods
    Petkovic, Matej
    Dzeroski, Saso
    Kocev, Dragi
    [J]. ACTA POLYTECHNICA HUNGARICA, 2020, 17 (10) : 129 - 148
  • [25] Classification of repeated measurements data using tree-based ensemble methods
    Adler, Werner
    Potapov, Sergej
    Lausen, Berthold
    [J]. COMPUTATIONAL STATISTICS, 2011, 26 (02) : 355 - 369
  • [27] Explainability Metrics and Properties for Counterfactual Explanation Methods
    Singh, Vandita
    Cyras, Kristijonas
    Inam, Rafia
    [J]. EXPLAINABLE AND TRANSPARENT AI AND MULTI-AGENT SYSTEMS, EXTRAAMAS 2022, 2022, 13283 : 155 - 172
  • [28] Explainability Methods for Graph Convolutional Neural Networks
    Pope, Phillip E.
    Kolouri, Soheil
    Rostami, Mohammad
    Martin, Charles E.
    Hoffmann, Heiko
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10764 - 10773
  • [29] A Survey of Explainability Methods in Explainable Recommendation Models
    Gao, Guangshang
    [J]. DATA ANALYSIS AND KNOWLEDGE DISCOVERY, 2024, 8 (8-9) : 6 - 19
  • [30] Evaluating Explainability Methods Intended for Multiple Stakeholders
    Martin, Kyle
    Liret, Anne
    Wiratunga, Nirmalie
    Owusu, Gilbert
    Kern, Mathias
    [J]. KI - KÜNSTLICHE INTELLIGENZ, 2021, 35 : 397 - 411