Evaluating Explainability Methods Intended for Multiple Stakeholders

Cited by: 0
Authors
Kyle Martin
Anne Liret
Nirmalie Wiratunga
Gilbert Owusu
Mathias Kern
Affiliations
[1] Robert Gordon University
[2] BT France, Tour Ariane
[3] BT Applied Research
Source
Künstliche Intelligenz, 2021, 35 (3-4)
Keywords
Machine learning; Similarity modeling; Explainability; Information retrieval
DOI
Not available
Abstract
Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This presents a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations, offering increasing degrees of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.
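The abstract names two similarity-based quality metrics, MITM and TYN, but does not define them. Purely as an illustration of the general idea of using similarity knowledge to evaluate explanations, and not as the paper's actual metric definitions, the minimal Python sketch below scores how often a case's nearest neighbours in an embedding space share its label; the function neighbourhood_agreement, its parameters, and the toy data are all hypothetical.

import numpy as np

def neighbourhood_agreement(embeddings: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    # Hypothetical illustration only: NOT the paper's MITM or TYN definitions.
    # Measures how well the similarity space supports neighbour-based
    # explanations: the fraction of each case's k nearest neighbours that
    # share its label, averaged over all cases.
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude self-matches
    nn_idx = np.argsort(dists, axis=1)[:, :k]  # k nearest neighbours per case
    agree = (labels[nn_idx] == labels[:, None]).mean(axis=1)
    return float(agree.mean())

# Toy check: two well-separated clusters should score close to 1.0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(5, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
print(neighbourhood_agreement(X, y, k=5))

Under these assumptions, a score near 1.0 suggests that explanations retrieved from a case's neighbourhood will be consistent with the case itself, while a low score flags regions of the similarity space where neighbour-based explanations may mislead users.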
Pages: 397-411 (14 pages)
Related Papers
50 records in total
  • [1] Evaluating Explainability Methods Intended for Multiple Stakeholders
    Martin, Kyle
    Liret, Anne
    Wiratunga, Nirmalie
    Owusu, Gilbert
    Kern, Mathias
    [J]. KUNSTLICHE INTELLIGENZ, 2021, 35 (3-4): 397-411
  • [2] Evaluating the Explainability of Neural Rankers
    Pandian, Saran
    Ganguly, Debasis
    MacAvaney, Sean
    [J]. ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT IV, 2024, 14611: 369-383
  • [3] Evaluating the effectiveness of teacher education in Oman: a multiple case study of multiple stakeholders
    Al-Harthi, Aisha Salim
    Hammad, Waheed
    Al-Seyabi, Fawzia
    Al-Najjar, Noor
    Al-Balushi, Sulaiman
    Emam, Mahmoud
    [J]. QUALITY ASSURANCE IN EDUCATION, 2022, 30 (04): 477-494
  • [4] Evaluating explainability for graph neural networks
    Agarwal, Chirag
    Queen, Owen
    Lakkaraju, Himabindu
    Zitnik, Marinka
    [J]. SCIENTIFIC DATA, 2023, 10 (01)
  • [5] The Issue of Baselines in Explainability Methods
    Ioannou, George
    Stafylopatis, Andreas
    [J]. 2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023: 958-965
  • [6] Evaluating Search System Explainability with Psychometrics and Crowdsourcing
    Chen, Catherine
    Eickhoff, Carsten
    [J]. PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024: 1051-1061
  • [7] Modeling and Evaluating Personas with Software Explainability Requirements
    Ramos, Henrique
    Fonseca, Mateus
    Ponciano, Lesandro
    [J]. HUMAN-COMPUTER INTERACTION, HCI-COLLAB, 2021, 1478: 136-149
  • [8] Evaluating Neighbor Explainability for Graph Neural Networks
    Llorente, Oscar
    Fawzy, Rana
    Keown, Jared
    Horemuz, Michal
    Vaderna, Peter
    Laki, Sandor
    Kotroczo, Roland
    Csoma, Rita
    Szalai-Gindl, Janos Mark
    [J]. EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT I, XAI 2024, 2024, 2153: 383-402
  • [9] Evaluating the performance of sustainable development in urban neighborhoods based on the feedback of multiple stakeholders
    Karatas, Aslihan
    El-Rayes, Khaled
    [J]. SUSTAINABLE CITIES AND SOCIETY, 2015, 14: 374-382