A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence

Cited: 5
Authors
Vilone, Giulia [1 ]
Longo, Luca [1 ]
Affiliations
[1] Technol Univ Dublin, Sch Comp Sci, Appl Intelligence Res Ctr, Artificial Intelligence & Cognit Load Res Lab, Dublin, Ireland
Keywords
Explainable Artificial Intelligence; Argumentation; Human-centred evaluation; Non-monotonic reasoning; Explainability;
DOI
10.1007/978-3-031-08333-4_36
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. In formal logic, computational argumentation is a method, within Artificial Intelligence (AI), focused on modeling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation of a machine-learned model in order to enhance its degree of explainability, by employing principles and techniques from computational argumentation. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique to evaluate such a representation, and potentially other XAI methods, in the form of a questionnaire for explainability. An experiment was conducted with two groups of human participants, one interacting with the argument-based representation and one with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the argument-based representation is statistically similar to that of the decision tree, as reported by humans via the novel questionnaire.
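The non-monotonic behaviour described in the abstract — a conclusion being retracted when a new counter-argument arrives — can be illustrated with a minimal sketch of a Dung-style abstract argumentation framework under grounded semantics. This is an assumption-laden toy example, not the authors' implementation; the function name `grounded_extension` and the example arguments are hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework:
    the least fixed point of the characteristic function
    F(S) = {a | every attacker of a is attacked by some member of S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if each of its
        # attackers is itself attacked by an argument already accepted.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Non-monotonicity in action: "a" is accepted until attacker "b" is added.
print(grounded_extension({"a"}, set()))                # {'a'}
print(grounded_extension({"a", "b"}, {("b", "a")}))    # {'b'}: "a" retracted
```

Adding the attack ("b", "a") removes "a" from the accepted set, which is exactly the retraction-on-new-information behaviour the paper's argument-based representation is built on.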
Pages: 447-460
Page count: 14