Interpretability and Explainability of LSP Evaluation Criteria

Cited by: 7
Authors
Dujmovic, Jozo [1 ]
Affiliation
[1] San Francisco State Univ, Dept Comp Sci, San Francisco, CA 94132 USA
DOI
10.1109/fuzz48607.2020.9177578
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Logic Scoring of Preference (LSP) is a soft computing decision method for the evaluation and selection of complex objects and alternatives using logic criteria. LSP criteria are based on graded logic aggregation structures that in most cases have the canonical form of a tree. Such trees aggregate degrees of truth or degrees of fuzzy membership, and the aggregators are graded logic functions. Each node in the tree of logic aggregators has a specific semantic identity: it has an interpretation, role, meaning, and importance for the decision maker. The semantic identity of all arguments can be used to develop explainable LSP criteria and to explain all results generated in the process of evaluation, comparison, and selection of complex objects and alternatives. In this paper we propose explainability parameters and use them in decision making to explain both the results of evaluating a single object and the results of comparing and selecting multiple competitive alternatives.
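To make the tree-of-aggregators idea concrete, the following minimal sketch implements a toy LSP-style criterion tree in Python, assuming the common realization of graded conjunction/disjunction aggregators as weighted power means. The attribute names, weights, and exponent values are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of an LSP-style criterion tree (illustrative, not the paper's model).
# Each node has a semantic identity (name), relative weights, and an aggregator
# realized here as a weighted power mean (WPM).
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    name: str      # semantic identity of the input attribute
    value: float   # degree of truth / suitability in [0, 1]

@dataclass
class Node:
    name: str                            # semantic identity of the aggregator
    children: List[Union["Node", Leaf]]  # sub-criteria aggregated at this node
    weights: List[float]                 # relative importances (assumed to sum to 1)
    r: float                             # WPM exponent: r < 1 leans conjunctive (high andness),
                                         # r > 1 leans disjunctive, r = 1 is the arithmetic mean

def evaluate(node: Union[Node, Leaf]) -> float:
    """Aggregate degrees of suitability bottom-up through the criterion tree."""
    if isinstance(node, Leaf):
        return node.value
    xs = [evaluate(c) for c in node.children]
    if node.r == 0:  # limit case of the power mean: weighted geometric mean
        result = 1.0
        for w, x in zip(node.weights, xs):
            result *= x ** w
        return result
    return sum(w * x ** node.r for w, x in zip(node.weights, xs)) ** (1.0 / node.r)

# Hypothetical two-level criterion for evaluating a single alternative.
criterion = Node(
    name="overall suitability",
    children=[
        Node("functionality",
             [Leaf("feature coverage", 0.8), Leaf("usability", 0.7)],
             [0.5, 0.5], r=1.0),
        Leaf("cost score", 0.6),
    ],
    weights=[0.6, 0.4],
    r=0.5,  # mild conjunction: both functionality and cost must be acceptable
)

print(f"{criterion.name}: {evaluate(criterion):.3f}")
```

Because every node carries a name, weights, and an andness-related exponent, each intermediate aggregation result can be reported and traced back to the inputs that produced it, which is the structural basis for the explainability the paper discusses.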
Pages: 8
Related papers
50 records in total
  • [31] Explainability Audit: An Automated Evaluation of Local Explainability in Rooftop Image Classification
    Don, Duleep Rathamage
    Boardman, Jonathan
    Sayenju, Sudhashree
    Aygun, Ramazan
    Zhang, Yifan
    Franks, Bill
    Johnston, Sereres
    Lee, George
    Sullivan, Dan
    Modgil, Girish
    2022 IEEE 23RD INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE (IRI 2022), 2022, : 184 - 189
  • [32] Process Knowledge-Infused AI: Toward User-Level Explainability, Interpretability, and Safety
    Sheth, Amit
    Gaur, Manas
    Roy, Kaushik
    Venkataraman, Revathy
    Khandelwal, Vedant
    IEEE INTERNET COMPUTING, 2022, 26 (05) : 76 - 84
  • [33] Enhancing explainability in real-world scenarios: Towards a robust stability measure for local interpretability
    Sepulveda, Eduardo
    Vandervorst, Felix
    Baesens, Bart
    Verdonck, Tim
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 274
  • [34] The Solvability of Interpretability Evaluation Metrics
    Zhou, Yilun
    Shah, Julie
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2399 - 2415
  • [35] An empirical analysis of assessment Errors for weights and andness in LSP criteria
    Dujmovic, JJ
    Fang, WY
    MODELING DECISIONS FOR ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, 3131 : 139 - 150
  • [36] Trading-Off Interpretability and Accuracy in Medical Applications: A Study Toward Optimal Explainability of Hoeffding Trees
    Sharma, Arnab
    Leite, Daniel
    Demir, Caglar
    Ngomo, Axel-Cyrille Ngonga
    2024 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, FUZZ-IEEE 2024, 2024,
  • [37] How to Build Self-Explaining Fuzzy Systems: From Interpretability to Explainability [AI-eXplained]
    Stepin, Ilia
    Suffian, Muhammad
    Catala, Alejandro
    Alonso-Moral, Jose M.
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2024, 19 (01) : 81 - 82
  • [38] Deep Learning Model for Interpretability and Explainability of Aspect-Level Sentiment Analysis Based on Social Media
    Singh, Nikhil Kumar
    Agal, Sanjay
    Gadekallu, Thippa Reddy
    Shabaz, Mohammad
    Keshta, Ismail
    Jindal, Latika
    Soni, Mukesh
    Byeon, Haewon
    Singh, Pavitar Parkash
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, : 1 - 12
  • [39] DEFINING THE TRANSPARENCY, EXPLAINABILITY, AND INTERPRETABILITY OF ALGORITHMS: A KEY STEP TOWARDS FAIR AND JUST DECISION-MAKING
    Tomova, Georgia D.
    Gilthorpe, Mark S.
    Bruneau, Gabriela C. Arriagada
    Tennant, Peter W. G.
    JOURNAL OF EPIDEMIOLOGY AND COMMUNITY HEALTH, 2022, 76 : A75 - A76
  • [40] Automatic Evaluation of Integrated Reports with Interpretability
    Kawamura, Kohei
    Sakai, Hiroyuki
    Enami, Kengo
    Takano, Kaito
    Nakagawa, Kei
    Transactions of the Japanese Society for Artificial Intelligence, 2024, 39 (04)