Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study

Times Cited: 0
Authors
Heider, Michael [1 ]
Stegherr, Helena [1 ]
Nordsieck, Richard [2 ]
Haehner, Joerg [1 ]
Affiliations
[1] University of Augsburg, Organic Computing Group, Augsburg, Germany
[2] XITASO GmbH, IT & Software Solutions, Augsburg, Germany
Keywords
Rule-based learning; self-explaining; decision support; sociotechnical system; learning classifier system; explainable AI
DOI
10.1162/artl_a_00414
Chinese Library Classification (CLC)
TP18 (Artificial Intelligence Theory)
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by, and engage efficiently with, operators, decision support systems need to be able to explain the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate transparent decision-making, and we highlight techniques for improving it. Furthermore, we present a novel approach to assessing application-specific explainability needs when designing LCS models. To this end, we propose an application-independent template of seven questions and demonstrate its use in an interview-based case study for a manufacturing scenario. We find that the answers received yield useful insights for designing a well-suited LCS model, as well as requirements that stakeholders must meet to engage actively with an intelligent agent.
Pages: 468-486
Page count: 19