Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study

Cited by: 0
Authors
Heider, Michael [1 ]
Stegherr, Helena [1 ]
Nordsieck, Richard [2 ]
Haehner, Joerg [1 ]
Affiliations
[1] University of Augsburg, Organic Computing Group, Augsburg, Germany
[2] XITASO GmbH IT & Software Solutions, Augsburg, Germany
Keywords
Rule-based learning; self-explaining; decision support; sociotechnical system; learning classifier system; explainable AI
DOI
10.1162/artl_a_00414
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate and highlight techniques that improve transparent decision-making. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach's use in an interview-based case study for a manufacturing scenario. We find that the answers received yield useful insights for a well-designed LCS model, as well as requirements for stakeholders to engage actively with an intelligent agent.
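The abstract's argument rests on LCS decisions being traceable to human-readable rules. As a rough, hypothetical sketch of why rule-based models lend themselves to such explanations (this is not the authors' implementation; the Rule class, the decide function, and the toy manufacturing rules are invented for illustration, and in a real LCS the conditions and fitness values would be learned rather than hard-coded):

    # Minimal, generic sketch of rule-based decision-making with a
    # built-in explanation: each rule is inspectable, and a decision
    # can be traced back to the rules that matched the input.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        # One (low, high) interval per input feature; None = "don't care".
        bounds: list      # e.g. [(0.0, 0.5), None] matches x[0] in [0, 0.5]
        action: str       # recommended decision
        fitness: float    # learned quality estimate (hard-coded here)

        def matches(self, x):
            return all(b is None or b[0] <= xi <= b[1]
                       for xi, b in zip(x, self.bounds))

    def decide(rules, x):
        """Return the best matching rule's action plus a textual explanation."""
        match_set = [r for r in rules if r.matches(x)]
        if not match_set:
            return None, "No rule matched; the system abstains."
        best = max(match_set, key=lambda r: r.fitness)
        reason = (f"Chose '{best.action}' because input {x} satisfies "
                  f"condition {best.bounds} (fitness {best.fitness:.2f}); "
                  f"{len(match_set)} rule(s) matched in total.")
        return best.action, reason

    # Toy rule population over two normalized sensor readings.
    rules = [
        Rule(bounds=[(0.0, 0.5), None], action="reduce_speed", fitness=0.9),
        Rule(bounds=[(0.0, 1.0), (0.5, 1.0)], action="increase_cooling", fitness=0.7),
    ]
    action, why = decide(rules, [0.2, 0.8])
    print(action)  # reduce_speed
    print(why)

The point of the sketch is the reason string: unlike a black-box model, the system can report exactly which condition the input satisfied, which is the kind of explanation the operators in the case study are asked about.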
Pages: 468–486
Page count: 19