"How Good Is Your Explanation?": Towards a Standardised Evaluation Approach for Diverse XAI Methods on Multiple Dimensions of Explainability

Cited by: 0
Authors
Bhattacharya, Aditya [1 ]
Verbert, Katrien [1 ]
Institutions
[1] Katholieke Univ Leuven, Leuven, Belgium
Keywords
Explainable AI; XAI; Explainable AI Evaluation
DOI
10.1145/3631700.3664911
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial Intelligence (AI) systems involve diverse components, such as data, models, users, and predicted outcomes. To elucidate these different aspects of AI systems, multifaceted explanations that combine diverse explainable AI (XAI) methods are beneficial. However, widely adopted user-centric XAI evaluation methods do not measure such explanations across the different components of the system. In this position paper, we advocate an approach that evaluates XAI methods across the diverse dimensions of explainability within AI systems on a normalised scale. We argue that the prevalent user-centric evaluation methods fall short of facilitating meaningful comparisons across different types of XAI methodologies. Moreover, we discuss the potential advantages of adopting a standardised approach, which would enable comprehensive evaluations of explainability across systems. By considering dimensions of explainability such as data, model, predictions, and target users, a standardised evaluation approach promises to facilitate both inter-system and intra-system comparisons for user-centric AI systems.
Pages: 513-515
Number of pages: 3