Measuring Model Understandability by means of Shapley Additive Explanations

Cited by: 7
Authors
Mariotti, Ettore [1 ]
Alonso-Moral, Jose M. [1 ]
Gatt, Albert [2 ]
Affiliations
[1] Univ Santiago de Compostela, Ctr Singular Invest Tecnoloxías Intelixentes, Santiago De Compostela, Spain
[2] Univ Utrecht, Utrecht, Netherlands
Keywords
NUMBER; 7; PLUS;
DOI
10.1109/FUZZ-IEEE55066.2022.9882773
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
In this work, we link the understandability of machine learning models to the complexity of their SHapley Additive exPlanations (SHAP). Through this reframing, we introduce two novel metrics for understandability: SHAP Length and SHAP Interaction Length. These are model-agnostic, efficient, intuitive, and theoretically grounded metrics anchored in well-established game-theoretic and psychological principles. We show how these metrics resonate with other, model-specific ones and how they can enable a fairer comparison of epistemically different models in the context of Explainable Artificial Intelligence. In particular, we quantitatively explore the understandability-performance tradeoff of different models applied to both classification and regression problems. The reported results suggest the value of the new metrics in the context of automated machine learning and multi-objective optimisation.
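The abstract does not formally define the two metrics, so the sketch below is an illustration only: it computes exact Shapley values for a tiny model by brute-force enumeration of feature coalitions (replacing absent features with a baseline value, a common SHAP convention), and then one plausible reading of "SHAP Length" as the number of features whose attribution magnitude is non-negligible. The function names, the threshold `tol`, and the "fraction of total attribution mass" criterion are assumptions, not the paper's definitions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at point x.
    'Absent' features are replaced by the baseline (a common SHAP convention)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

def shap_length(phi, tol=0.05):
    """Hypothetical reading of 'SHAP Length': the number of features whose
    attribution magnitude exceeds a fraction `tol` of the total attribution mass."""
    total = sum(abs(p) for p in phi)
    return sum(1 for p in phi if abs(p) > tol * total)

# Toy linear model: one feature (x1) contributes almost nothing, so under
# this reading the explanation effectively involves only two features.
model = lambda v: 3.0 * v[0] + 0.1 * v[1] + 2.0 * v[2]
phi = shapley_values(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
print(phi, shap_length(phi))
```

For a linear model with a zero baseline, the exact Shapley values reduce to each feature's weighted contribution, so the near-zero coefficient yields a near-zero attribution and a shorter "explanation" under this hypothetical metric.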
Pages: 8