Constructing Interpretable Belief Rule Bases Using a Model-Agnostic Statistical Approach

Cited: 0
Authors
Sun, Chao [1 ]
Wang, Yinghui [1 ]
Yan, Tao [1 ]
Yang, Jinlong [1 ]
Huang, Liangyi [2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
[2] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
Funding
National Natural Science Foundation of China;
Keywords
Data models; Knowledge based systems; Parameter extraction; Fuzzy systems; Feature extraction; Explosions; Cognition; Belief rule base (BRB); data-driven; explainable artificial intelligence (XAI); interpretability; model-agnostic; EVIDENTIAL REASONING APPROACH; SYSTEM;
DOI
10.1109/TFUZZ.2024.3416448
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Belief rule base (BRB) has attracted considerable interest due to its interpretability and strong modeling accuracy. Generally, BRB construction relies on prior knowledge or historical data. Knowledge-based BRBs are constrained by the limitations of expert knowledge and are unsuitable for large-scale rule bases. Data-driven techniques excel at extracting model parameters from data, thus significantly improving BRB accuracy. However, previous data-driven BRBs have neglected interpretability, and some still depend on prior knowledge or introduce additional parameters. These factors make the BRB highly problem-specific and limit its broad applicability. To address these problems, a model-agnostic statistical BRB (MAS-BRB) modeling approach is proposed in this article. It adopts a model-agnostic statistical methodology for parameter extraction, ensuring that the parameters both fulfill their intended roles within the BRB framework and accurately represent complex, nonlinear data relationships. A comprehensive interpretability analysis of MAS-BRB components further confirms their compliance with established BRB interpretability criteria. Experiments on multiple public datasets demonstrate that MAS-BRB not only achieves improved modeling performance but is also more effective than existing rule-based and traditional machine-learning models.
Pages: 5163-5175
Number of pages: 13