Analysis of characteristic functions on Shapley values in Machine Learning

Cited: 0
Authors
Jamshidi, Parisa [1 ]
Nowaczyk, Slawomir [1 ]
Rahat, Mahmoud [1 ]
Affiliations
[1] Halmstad Univ, Ctr Appl Intelligent Syst Res CAISR, Halmstad, Sweden
Source
2024 INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS, IE 2024 | 2024
Funding
Swedish Research Council;
Keywords
Shapley values; imbalanced data; XAI; F1-score; accuracy;
D O I
10.1109/IE61493.2024.00020
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the rapidly evolving field of AI, Explainable Artificial Intelligence (XAI) has become paramount, particularly in Intelligent Environments applications. It offers clarity and understanding in complex decision-making processes, fostering trust and enabling rigorous scrutiny. The Shapley value, renowned for its principled quantification of feature importance, has emerged as a prevalent standard in both academic research and practical application. Nevertheless, computing the Shapley value exactly requires evaluating all possible coalitions, a significant computational challenge since the problem is NP-hard. Consequently, approximation techniques are employed in most practical scenarios as a substitute for exact computation. The most common of these is SHAP (SHapley Additive exPlanations), which quantifies the influence a specific feature exerts on the decision outcomes of a specific Machine Learning model. However, the Shapley value's theoretical underpinnings concern the impact of features on model evaluation metrics, rather than merely on changes in the model's responses. This paper conducts a comparative analysis using controlled synthetic data with established ground truths. It juxtaposes the practical SHAP implementation with the theoretical formulation in two distinct scenarios: one using the F1-score and the other the accuracy metric. These are two representative characteristic functions that capture different aspects of performance, and whose appropriateness depends on the specific requirements and context of the task at hand. We analyze where these three alternatives agree and diverge in how they reflect feature effects.
Ultimately, our research seeks to determine the conditions under which SHAP outcomes are more aligned with either the F1-score or the accuracy metric, thereby providing valuable insights for their application in various Intelligent Environment contexts.
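The theoretical formulation the abstract contrasts with SHAP defines the characteristic function as a model evaluation metric over feature coalitions. A minimal sketch of that idea, with an illustrative toy characteristic function (the coalition accuracies below are made-up numbers, not results from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: phi_i is the weighted average, over all
    coalitions S not containing i, of the marginal gain value(S+{i}) - value(S),
    with weight |S|! (n-|S|-1)! / n!  (O(2^n) coalition evaluations)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for combo in combinations(others, k):
                S = frozenset(combo)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Hypothetical metric-based characteristic function on two features:
# the accuracy a model attains when trained on each feature subset.
acc = {
    frozenset(): 0.50,              # majority-class baseline
    frozenset({'A'}): 0.70,
    frozenset({'B'}): 0.60,
    frozenset({'A', 'B'}): 0.90,    # features are complementary
}
phi = shapley_values(['A', 'B'], lambda S: acc[S])
# Efficiency axiom: phi sums to acc(all features) - acc(no features) = 0.40
```

Swapping the accuracy table for per-coalition F1-scores changes only the `value` callable, which is precisely the comparison between characteristic functions the paper investigates; SHAP, by contrast, uses the model's raw output rather than a metric as its value function.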
Pages: 70-77 (8 pages)
Related Papers
50 records in total
  • [1] Blending Shapley values for feature ranking in machine learning: an analysis on educational data
    Guleria P.
    Neural Computing and Applications, 2024, 36 (23) : 14093 - 14117
  • [2] Shapley Values with Uncertain Value Functions
    Heese, Raoul
    Muecke, Sascha
    Jakobs, Matthias
    Gerlach, Thore
    Piatkowski, Nico
    ADVANCES IN INTELLIGENT DATA ANALYSIS XXI, IDA 2023, 2023, 13876 : 156 - 168
  • [3] Explainable Prediction of Acute Myocardial Infarction Using Machine Learning and Shapley Values
    Ibrahim, Lujain
    Mesinovic, Munib
    Yang, Kai-Wen
    Eid, Mohamad A.
    IEEE ACCESS, 2020, 8 : 210410 - 210417
  • [4] Explaining quantum circuits with Shapley values: towards explainable quantum machine learning
    Heese, Raoul
    Gerlach, Thore
    Muecke, Sascha
    Mueller, Sabine
    Jakobs, Matthias
    Piatkowski, Nico
    QUANTUM MACHINE INTELLIGENCE, 2025, 7 (01)
  • [5] Predicting Swarm Equatorial Plasma Bubbles via Machine Learning and Shapley Values
    Reddy, S. A.
    Forsyth, C.
    Aruliah, A.
    Smith, A.
    Bortnik, J.
    Aa, E.
    Kataria, D. O.
    Lewis, G.
    JOURNAL OF GEOPHYSICAL RESEARCH-SPACE PHYSICS, 2023, 128 (06)
  • [6] Integrating Shapley Values into Machine Learning Techniques for Enhanced Predictions of Hospital Admissions
    Feretzakis, Georgios
    Sakagianni, Aikaterini
    Anastasiou, Athanasios
    Kapogianni, Ioanna
    Bazakidou, Effrosyni
    Koufopoulos, Petros
    Koumpouros, Yiannis
    Koufopoulou, Christina
    Kaldis, Vasileios
    Verykios, Vassilios S.
    APPLIED SCIENCES-BASEL, 2024, 14 (13)
  • [7] A machine learning research template for binary classification problems and shapley values integration
    Smith, Matthew
    Alvarez, Francisco
    SOFTWARE IMPACTS, 2021, 8
  • [8] Explaining Reinforcement Learning with Shapley Values
    Beechey, Daniel
    Smith, Thomas M. S.
    Simsek, Ozgur
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [9] Explaining a Machine-Learning Lane Change Model With Maximum Entropy Shapley Values
    Li, Meng
    Wang, Yulei
    Sun, Hengyang
    Cui, Zhihao
    Huang, Yanjun
    Chen, Hong
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (06) : 3620 - 3628
  • [10] The Shapley function for fuzzy games with fuzzy characteristic functions
    Meng, Fanyong
    Zhao, Jinxian
    Zhang, Qiang
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2013, 25 (01) : 23 - 35