Explainable Machine Learning via Argumentation

Cited by: 1
Authors
Prentzas, Nicoletta [1 ]
Pattichis, Constantinos [1 ,2 ]
Kakas, Antonis [1 ]
Affiliations
[1] Univ Cyprus, 1 Panepistimiou Ave, CY-2109 Nicosia, Cyprus
[2] CYENS Ctr Excellence, 23 Dimarchou Lellou Demetriadi, CY-1016 Nicosia, Cyprus
Keywords
Argumentation in Machine Learning; Explainable Machine Learning; Explainable Conflict Resolution; RULES;
DOI
10.1007/978-3-031-44070-0_19
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a general Explainable Machine Learning framework and methodology based on Argumentation (ArgEML). The flexible reasoning form of argumentation in the face of unknown and incomplete information, together with the direct link of argumentation to justification and explanation, enables the development of a natural form of explainable machine learning. In this form of learning the explanations are useful not only for supporting the final predictions but also play a significant role in the learning process itself. The paper defines the basic theoretical notions of ArgEML together with its main machine learning operators and method of application. It describes how such an argumentation-based approach provides a flexible form of learning that recognizes difficult cases (with respect to the currently available training data) and separates these cases out not as definite predictive cases but as cases where it is more appropriate to explainably analyze the alternative predictions. Using the argumentation-based explanations we can partition the problem space into groups characterized by the basic argumentative tension between arguments for and against the alternatives. The paper presents a first evaluation of the approach by applying the ArgEML learning methodology both on artificial and on real-life datasets.
Pages: 371 - 398
Page count: 28
Related Papers
50 records in total
  • [41] Evaluating Explainable Machine Learning Models for Clinicians
    Scarpato, Noemi
    Nourbakhsh, Aria
    Ferroni, Patrizia
    Riondino, Silvia
    Roselli, Mario
    Fallucchi, Francesca
    Barbanti, Piero
    Guadagni, Fiorella
    Zanzotto, Fabio Massimo
    COGNITIVE COMPUTATION, 2024, 16 (04) : 1436 - 1446
  • [42] Explainable Machine Learning in the Research of Materials Science
    Wang, Guanjie
    Liu, Shengxian
    Zhou, Jian
    Sun, Zhimei
    ACTA METALLURGICA SINICA, 2024, 60 (10) : 1345 - 1361
  • [43] Review of explainable machine learning for anaerobic digestion
    Gupta, Rohit
    Zhang, Le
    Hou, Jiayi
    Zhang, Zhikai
    Liu, Hongtao
    You, Siming
    Ok, Yong Sik
    Li, Wangliang
    BIORESOURCE TECHNOLOGY, 2023, 369
  • [44] Explainable machine learning for motor fault diagnosis
    Wang, Yuming
    Wang, Peng
    2023 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC, 2023,
  • [45] Loan default predictability with explainable machine learning
    Li, Huan
    Wu, Weixing
    FINANCE RESEARCH LETTERS, 2024, 60
  • [46] Explainable and interpretable machine learning and data mining
    Atzmueller, Martin
    Fuernkranz, Johannes
    Kliegr, Tomas
    Schmid, Ute
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (05) : 2571 - 2595
  • [47] Understanding Online Purchases with Explainable Machine Learning
    Bastos, Joao A.
    Bernardes, Maria Ines
    INFORMATION, 2024, 15 (10)
  • [48] Explainable machine learning for project management control
    Ignacio Santos, Jose
    Pereda, Maria
    Ahedo, Virginia
    Manuel Galan, Jose
    COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 180
  • [49] Predicting Dengue Outbreaks with Explainable Machine Learning
    Aleixo, Robson
    Kon, Fabio
    Rocha, Rudi
    Camargo, Marcela Santos
    de Camargo, Raphael Y.
    2022 22ND IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING (CCGRID 2022), 2022, : 940 - 947
  • [50] NOVA - A tool for eXplainable Cooperative Machine Learning
    Heimerl, Alexander
    Baur, Tobias
    Lingenfelser, Florian
    Wagner, Johannes
    Andre, Elisabeth
    2019 8TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2019,