Interpretability and Explainability of LSP Evaluation Criteria

Cited by: 7
Authors
Dujmovic, Jozo [1 ]
Institution
[1] San Francisco State Univ, Dept Comp Sci, San Francisco, CA 94132 USA
Keywords
DOI
10.1109/fuzz48607.2020.9177578
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Logic Scoring of Preference (LSP) is a soft-computing decision method for the evaluation and selection of complex objects and alternatives using logic criteria. LSP criteria are based on graded logic aggregation structures that in most cases have the canonical form of a tree. Such trees aggregate degrees of truth or degrees of fuzzy membership, and the aggregators are graded logic functions. Each node in the tree of logic aggregators has a specific semantic identity: an interpretation, role, meaning, and importance for the decision maker. The semantic identity of all arguments can be used to develop explainable LSP criteria and to explain all results generated in the process of evaluation, comparison, and selection of complex objects and alternatives. In this paper we propose explainability parameters and use them in decision making to explain both the results of evaluating a single object and the results of comparing and selecting multiple competitive alternatives.
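The abstract describes LSP criteria as trees of graded logic aggregators over degrees of suitability. As a minimal sketch of that structure — assuming the weighted-power-mean form of graded conjunction/disjunction commonly used in LSP, with the tree layout, weights, and toy attributes entirely illustrative, not taken from the paper:

```python
import math

def wpm(values, weights, r):
    """Weighted power mean (sum_i w_i * x_i**r) ** (1/r).

    The exponent r sets the andness/orness of the aggregator:
    r -> -inf behaves like AND, r = 1 is the arithmetic mean,
    r -> +inf behaves like OR. Inputs are degrees in [0, 1].
    """
    if r == 0.0:  # geometric-mean limit (assumes all values > 0)
        return math.exp(sum(w * math.log(x) for x, w in zip(values, weights)))
    return sum(w * x ** r for x, w in zip(values, weights)) ** (1.0 / r)

def aggregate(node):
    """Evaluate an LSP-style criterion tree.

    A node is either a leaf degree of suitability (float) or a tuple
    (children, weights, r) describing one graded logic aggregator.
    """
    if isinstance(node, float):
        return node
    children, weights, r = node
    return wpm([aggregate(c) for c in children], weights, r)

# Toy criterion: overall = 0.6 * reliability + 0.4 * (mean of cost, speed degrees)
criterion = (
    [0.9,                              # leaf: reliability degree
     ([0.4, 0.8], [0.5, 0.5], 1.0)],   # sub-aggregator over cost, speed
    [0.6, 0.4],
    1.0,                               # r = 1: neutral (arithmetic) aggregation
)
print(aggregate(criterion))  # close to 0.78
```

Because each internal node carries its own weights and andness/orness setting, every node can be given the kind of semantic identity (role, meaning, importance) the abstract refers to, which is what makes per-node explanation of the overall score possible.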
Pages: 8