Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study

Cited by: 0
Authors
Heider, Michael [1 ]
Stegherr, Helena [1 ]
Nordsieck, Richard [2 ]
Haehner, Joerg [1 ]
Affiliations
[1] Univ Augsburg, Organ Comp Grp, Augsburg, Germany
[2] Xitaso GmbH, IT & Software Solut, Augsburg, Germany
Keywords
Rule-based learning; self-explaining; decision support; sociotechnical system; learning classifier system; explainable AI; KNOWLEDGE;
DOI
10.1162/artl_a_00414
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In sociotechnical settings, human operators are increasingly assisted by decision support systems. Employing such systems is expected to further improve important properties of sociotechnical systems, such as self-adaptation and self-optimization. To be accepted by and engage efficiently with operators, decision support systems need to be able to explain the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate transparent decision-making, and we highlight techniques that improve it further. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models: an application-independent template of seven questions. We demonstrate the approach's use in an interview-based case study for a manufacturing scenario and find that the answers received yield useful insights for a well-designed LCS model, as well as requirements for stakeholders to engage actively with an intelligent agent.
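To make the abstract's central claim concrete, the following is a minimal, hypothetical Python sketch of the rule-matching and voting step common to LCS-style rule-based learners: the subset of human-readable if-then rules that matches an input both produces the decision and serves directly as its explanation. All names (Rule, decide) and the toy manufacturing features (temp, pressure) are illustrative assumptions, not the authors' implementation or the seven-question template from the paper.

from dataclasses import dataclass

@dataclass
class Rule:
    # One classifier: an interval condition per feature, a proposed action,
    # and a learned fitness estimating the rule's quality.
    condition: dict   # feature name -> (lower, upper) bounds
    action: str
    fitness: float

    def matches(self, x: dict) -> bool:
        # The rule applies iff every conditioned feature lies in its interval.
        return all(lo <= x[f] <= hi for f, (lo, hi) in self.condition.items())

def decide(population, x):
    """Fitness-weighted vote over the matching rules; the match set itself
    is a rule-level explanation of the decision."""
    match_set = [r for r in population if r.matches(x)]
    votes = {}
    for r in match_set:
        votes[r.action] = votes.get(r.action, 0.0) + r.fitness
    return max(votes, key=votes.get), match_set

# Toy example in the spirit of the paper's manufacturing scenario.
rules = [
    Rule({"temp": (0, 80), "pressure": (1.0, 2.5)}, "continue", 0.9),
    Rule({"temp": (80, 200)}, "halt_and_inspect", 0.8),
]
action, explanation = decide(rules, {"temp": 95, "pressure": 1.8})
print(action)  # halt_and_inspect
for r in explanation:
    print(r.condition, "->", r.action)  # the matched rules are the explanation

A decision such as halt_and_inspect can thus be justified to an operator by showing exactly which interval conditions fired, which is the kind of transparency the article argues LCSs provide by construction.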
Pages: 468-486
Number of pages: 19
Related Papers (50 in total)
  • [31] Explainable AI model for PDFMal detection based on gradient boosting model
    Elattar, Mona
    Younes, Ahmed
    Gad, Ibrahim
    Elkabani, Islam
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (34) : 21607 - 21622
  • [32] Explainable AI for Healthcare: A Study for Interpreting Diabetes Prediction
    Gandhi, Neel
    Mishra, Shakti
    MACHINE LEARNING AND BIG DATA ANALYTICS (PROCEEDINGS OF INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND BIG DATA ANALYTICS (ICMLBDA) 2021), 2022, 256 : 95 - 105
  • [33] Trust Indicators and Explainable AI: A Study on User Perceptions
    Ribes, Delphine
    Henchoz, Nicolas
    Portier, Helene
    Defayes, Lara
    Phan, Thanh-Trung
    Gatica-Perez, Daniel
    Sonderegger, Andreas
    HUMAN-COMPUTER INTERACTION, INTERACT 2021, PT II, 2021, 12933 : 662 - 671
  • [34] Explainable AI to understand study interest of engineering students
    Ghosh, Sourajit
    Kamal, Md. Sarwar
    Chowdhury, Linkon
    Neogi, Biswarup
    Dey, Nilanjan
    Sherratt, Robert Simon
    EDUCATION AND INFORMATION TECHNOLOGIES, 2024, 29 (04) : 4657 - 4672
  • [36] Towards unveiling sensitive and decisive patterns in explainable AI with a case study in geometric deep learning
    Zhu, Jiajun
    Miao, Siqi
    Ying, Rex
    Li, Pan
    NATURE MACHINE INTELLIGENCE, 2025, : 471 - 483
  • [37] Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life
    Kobayashi, Kazuma
    Alam, Syed Bahauddin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 129
  • [38] Can I Trust My Anomaly Detection System? A Case Study Based on Explainable AI
    Rashid, Muhammad
    Amparore, Elvio
    Ferrari, Enrico
    Verda, Damiano
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2024, PT IV, 2024, 2156 : 243 - 254
  • [39] Explainable AI for Multimodal Credibility Analysis: Case Study of Online Beauty Health (Mis)-Information
    Wagle, Vidisha
    Kaur, Kulveen
    Kamat, Pooja
    Patil, Shruti
    Kotecha, Ketan
    IEEE ACCESS, 2021, 9 : 127985 - 128022
  • [40] Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study
    Milckel, Bjorn
    Dinglinger, Pascal
    Holtmann, Jonas
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2024, PT IV, 2024, 2156 : 331 - 352