A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence

Cited by: 5
Authors
Vilone, Giulia [1 ]
Longo, Luca [1 ]
Affiliations
[1] Technol Univ Dublin, Sch Comp Sci, Appl Intelligence Res Ctr, Artificial Intelligence & Cognit Load Res Lab, Dublin, Ireland
Keywords
Explainable Artificial Intelligence; Argumentation; Human-centred evaluation; Non-monotonic reasoning; Explainability;
DOI
10.1007/978-3-031-08333-4_36
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. Within Artificial Intelligence (AI), computational argumentation is a formal method focused on modelling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation of a machine-learned model in order to enhance its degree of explainability, by employing principles and techniques from computational argumentation. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique, in the form of a questionnaire for explainability, to evaluate this representation and potentially other XAI methods. An experiment has been conducted with two groups of human participants, one interacting with the argument-based representation and one with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the argument-based representation is statistically similar to that associated with decision trees, as reported by humans via the novel questionnaire.
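To illustrate the computational-argumentation concept the abstract refers to, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework, the standard formal backbone of such argument-based representations. This is an illustrative sketch only, not the authors' implementation; the argument names and attack relation are a made-up toy example.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function F(S) = {a : every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if each of its
        # attackers is itself attacked by some member of `extension`.
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in extension)
                      for b in attackers[a])}
        if new == extension:
            return extension
        extension = new

# Toy framework: argument a attacks b, and b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

Here `a` is unattacked and therefore accepted; `b` is defeated by `a`, which in turn reinstates `c`, mirroring the non-monotonic reinstatement behaviour the abstract describes.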
Pages: 447 - 460
Page count: 14