A Meta Algorithm for Interpretable Ensemble Learning: The League of Experts

Cited by: 0
Authors
Vogel, Richard [1 ]
Schlosser, Tobias [2 ]
Manthey, Robert [1 ]
Ritter, Marc [1 ]
Vodel, Matthias [1 ]
Eibl, Maximilian [3 ]
Schneider, Kristan Alexander [4 ]
Affiliations
[1] Univ Appl Sci Mittweida, Media Informat, D-09648 Mittweida, Germany
[2] Tech Univ Chemnitz, Media Comp, D-09107 Chemnitz, Germany
[3] Tech Univ Chemnitz, Media Informat, D-09107 Chemnitz, Germany
[4] Univ Appl Sci Mittweida, Modeling & Simulat, D-09648 Mittweida, Germany
Source
MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2024, 6 (2)
Keywords
ensemble learning; multiagent systems; explainability; glass box models; black box; decision
DOI
10.3390/make6020038
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Background. The importance of explainable artificial intelligence and machine learning (XAI/XML) is increasingly recognized, with the aim of understanding how information contributes to decisions and of exposing a method's bias or its sensitivity to data pathologies. Efforts are often directed toward post hoc explanations of black box models; such approaches add additional sources of error without resolving the shortcomings of the underlying models. Less effort goes into the design of intrinsically interpretable approaches. Methods. We introduce an intrinsically interpretable methodology motivated by ensemble learning: the League of Experts (LoE) model. We first establish the theoretical framework and then deduce a modular meta algorithm. Our description focuses primarily on classification problems; however, LoE applies equally to regression problems. For classification, we employ ensembles of classical decision trees as a particular instance. This choice facilitates the derivation of human-understandable decision rules for the underlying classification problem, yielding a derived rule learning system denoted RuleLoE. Results. In addition to 12 KEEL classification datasets, we employ two standard datasets from particularly relevant domains, medicine and finance, to illustrate the LoE algorithm. The accuracy and rule coverage of LoE are comparable to those of common state-of-the-art classification methods. Moreover, LoE delivers a clearly understandable set of decision rules with adjustable complexity that describes the classification problem. Conclusions. LoE is a reliable method for classification and regression problems whose accuracy is appropriate for situations in which the underlying causalities, rather than merely accurate predictions or classifications, are at the center of interest.
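To make the idea sketched in the abstract concrete, the following Python snippet is a minimal, hedged illustration of an ensemble of interpretable decision-tree "experts" from which readable rules can be extracted. It is not the authors' LoE/RuleLoE implementation; the use of scikit-learn, the breast-cancer dataset, bootstrap sampling, majority voting, the ensemble size, and the tree depth are all illustrative assumptions.

```python
# Minimal illustrative sketch (not the authors' LoE/RuleLoE code): a small ensemble
# of shallow decision-tree "experts", each fitted on a bootstrap sample, combined by
# majority vote; each expert's decision rules remain human-readable and can be printed.
# Assumptions: scikit-learn and NumPy are available; dataset, ensemble size, and
# tree depth are arbitrary illustrative choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

rng = np.random.default_rng(0)
n_experts, max_depth = 5, 3            # max_depth controls the rule-set complexity
experts = []
for _ in range(n_experts):
    idx = rng.integers(0, len(X_train), size=len(X_train))   # bootstrap sample
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(X_train[idx], y_train[idx])
    experts.append(tree)

# Majority vote across the experts (binary labels 0/1)
votes = np.stack([t.predict(X_test) for t in experts])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_test).mean())

# Human-readable decision rules of the first expert
print(export_text(experts[0], feature_names=list(data.feature_names)))
```

Plain bagging with majority voting serves only as a stand-in here; the actual LoE meta algorithm and the derived RuleLoE rule learning system are specified in the paper itself, with the tree depth in this sketch playing the role of the adjustable rule complexity mentioned in the abstract.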
Pages: 800-826
Page count: 27