Constructing Interpretable Belief Rule Bases Using a Model-Agnostic Statistical Approach

Times Cited: 0
Authors
Sun, Chao [1 ]
Wang, Yinghui [1 ]
Yan, Tao [1 ]
Yang, Jinlong [1 ]
Huang, Liangyi [2 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
[2] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
Funding
National Natural Science Foundation of China;
Keywords
Data models; Knowledge based systems; Parameter extraction; Fuzzy systems; Feature extraction; Explosions; Cognition; Belief rule base (BRB); data-driven; explainable artificial intelligence (XAI); interpretability; model-agnostic; EVIDENTIAL REASONING APPROACH; SYSTEM;
DOI
10.1109/TFUZZ.2024.3416448
CLC Classification Number
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Belief rule base (BRB) has attracted considerable interest due to its interpretability and exceptional modeling accuracy. Generally, BRB construction relies on prior knowledge or historical data. Knowledge-based BRBs are constrained by the limitations of expert knowledge and are therefore unsuitable for large-scale rule bases. Data-driven techniques excel at extracting model parameters from data, thereby significantly improving BRB accuracy. However, previous data-driven BRBs have neglected interpretability, and some still depend on prior knowledge or introduce additional parameters. These factors make the BRB highly problem-specific and limit its broad applicability. To address these problems, a model-agnostic statistical BRB (MAS-BRB) modeling approach is proposed in this article. It adopts a model-agnostic statistical methodology for parameter extraction, ensuring that the parameters both fulfill their intended roles within the BRB framework and accurately represent complex, nonlinear data relationships. A comprehensive interpretability analysis of MAS-BRB components further confirms their compliance with established BRB interpretability criteria. Experiments on multiple public datasets demonstrate that MAS-BRB not only achieves improved modeling performance but also shows greater effectiveness than existing rule-based and traditional machine-learning models.
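As background to the abstract, the minimal Python sketch below illustrates the generic belief-rule-base inference step that data-driven approaches such as MAS-BRB parameterize: computing rule activation weights and fusing the activated rules' consequent beliefs with the analytical evidential reasoning (ER) algorithm. It follows the standard RIMER formulation rather than the MAS-BRB parameter-extraction procedure itself; the function names and the two-rule toy example are illustrative assumptions, not material from the paper.

```python
import numpy as np

def activation_weights(matching, rule_weights, attr_weights):
    """Activation weight of each rule (standard RIMER formulation).

    matching:     (K, T) individual matching degrees of one input against the
                  T antecedent referential values of each of the K rules.
    rule_weights: (K,) rule weights theta_k in [0, 1].
    attr_weights: (T,) attribute weights delta_i, normalized by their maximum.
    """
    delta_bar = np.asarray(attr_weights, dtype=float)
    delta_bar = delta_bar / delta_bar.max()
    joint = np.prod(np.asarray(matching, dtype=float) ** delta_bar, axis=1)
    w = np.asarray(rule_weights, dtype=float) * joint
    return w / w.sum()

def er_combine(weights, beliefs):
    """Analytical evidential reasoning (ER) aggregation of rule consequents.

    weights: (K,) activation weights w_k, summing to 1.
    beliefs: (K, N) consequent belief degrees; a row may sum to < 1 if a rule
             is incomplete.
    Returns the aggregated belief distribution over the N consequent grades.
    """
    w = np.asarray(weights, dtype=float)[:, None]          # (K, 1)
    beta = np.asarray(beliefs, dtype=float)                 # (K, N)
    incomplete = 1.0 - w[:, 0] * beta.sum(axis=1)           # (K,)
    a = np.prod(w * beta + incomplete[:, None], axis=0)     # (N,)
    b = np.prod(incomplete)                                 # scalar
    d = np.prod(1.0 - w[:, 0])                              # scalar
    return (a - b) / (a.sum() - (beta.shape[1] - 1) * b - d)

# Toy rule base: two rules, two antecedent attributes, three consequent grades.
alpha = [[0.8, 0.2], [0.3, 0.7]]           # matching degrees of one input
theta = [1.0, 0.9]                         # rule weights
delta = [1.0, 1.0]                         # attribute weights
beta = [[0.7, 0.3, 0.0], [0.0, 0.4, 0.6]]  # consequent belief degrees

w = activation_weights(alpha, theta, delta)
print(er_combine(w, beta))  # aggregated beliefs; sums to 1 since rules are complete
```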
Pages: 5163-5175
Number of Pages: 13
Related Papers
50 records in total
  • [31] A model-agnostic approach for understanding heart failure risk factors
    Miran, Seyed M.
    Nelson, Stuart J.
    Zeng-Treitler, Qing
    BMC RESEARCH NOTES, 2021, 14 (01)
  • [32] Predicting Defective Lines Using a Model-Agnostic Technique
    Wattanakriengkrai, Supatsara
    Thongtanunam, Patanamon
    Tantithamthavorn, Chakkrit
    Hata, Hideaki
    Matsumoto, Kenichi
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2022, 48 (05) : 1480 - 1496
  • [33] A Model-Agnostic Approach for Learning with Noisy Labels of Arbitrary Distributions
    Hao, Shuang
    Li, Peng
    Wu, Renzhi
    Chu, Xu
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1219 - 1231
  • [34] Model-Agnostic Explanations for Decisions Using Minimal Patterns
    Asano, Kohei
    Chun, Jinhee
    Koike, Atsushi
    Tokuyama, Takeshi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: THEORETICAL NEURAL COMPUTATION, PT I, 2019, 11727 : 241 - 252
  • [35] A model-agnostic approach for understanding heart failure risk factors
    Miran, Seyed M.
    Nelson, Stuart J.
    Zeng-Treitler, Qing
    BMC RESEARCH NOTES, 2021, 14 (01)
  • [36] Applying local interpretable model-agnostic explanations to identify substructures that are responsible for mutagenicity of chemical compounds
    Rosa, Lucca Caiaffa Santos
    Pimentel, Andre Silva
    MOLECULAR SYSTEMS DESIGN & ENGINEERING, 2024, 9 (09) : 920 - 936
  • [37] Enhancing Visualization and Explainability of Computer Vision Models with Local Interpretable Model-Agnostic Explanations (LIME)
    Hamilton, Nicholas
    Webb, Adam
    Wilder, Matt
    Hendrickson, Ben
    Blanck, Matt
    Nelson, Erin
    Roemer, Wiley
    Havens, Timothy C.
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 604 - 611
  • [38] Predicting households' residential mobility trajectories with geographically localized interpretable model-agnostic explanation (GLIME)
    Jin, Chanwoo
    Park, Sohyun
    Ha, Hui Jeong
    Lee, Jinhyung
    Kim, Junghwan
    Hutchenreuther, Johan
    Nara, Atsushi
    INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE, 2023, 37 (12) : 2597 - 2619
  • [39] Early knee osteoarthritis classification using distributed explainable convolutional neural network with local interpretable model-agnostic explanations
    Kumar, M. Ganesh
    Gumma, Lakshmi Narayana
    Neelam, Saikiran
    Yaswanth, Narikamalli
    Yedukondalu, Jammisetty
    ENGINEERING RESEARCH EXPRESS, 2024, 6 (04)
  • [40] TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models
    Schlegel, Udo
    Vo, Duy Lam
    Keim, Daniel A.
    Seebacher, Daniel
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021, PT I, 2021, 1524 : 5 - 14