Exploring accuracy and interpretability trade-off in tabular learning with novel attention-based models

Cited by: 0
Authors
Kodjo Mawuena Amekoe [1 ]
Hanane Azzag [3 ]
Zaineb Chelly Dagdia [1 ]
Mustapha Lebbah [2 ]
Gregoire Jaffre [2 ]
Affiliations
[1] Université Sorbonne Paris Nord
[2] LIPN, CNRS UMR
[3] Université Paris-Saclay
[4] DAVID Lab
[5] UVSQ
[6] Groupe BPCE
Keywords
Tabular data; Interpretability; Attention; Robust explanation;
DOI
10.1007/s00521-024-10163-9
Abstract
Apart from high accuracy, what interests many researchers and practitioners in real-life tabular learning problems (e.g., fraud detection and credit scoring) is uncovering hidden patterns in the data and/or providing meaningful justifications for the decisions made by machine learning models. In this regard, an important question arises: should one use inherently interpretable models, or explain full-complexity models such as XGBoost and Random Forest with post hoc tools? Opting for the second choice is typically supported by the accuracy metric, but it is not always evident that the performance gap is sufficiently significant, especially considering the current trend of accurate and inherently interpretable models, as well as other real-life evaluation criteria such as the faithfulness, stability, and computational cost of explanations. In this work, we show through benchmarking on 45 datasets that the relative accuracy loss is less than 4% on average when using intelligible models such as the explainable boosting machine. Furthermore, we propose a simple use of model ensembling to improve the expressiveness of TabSRALinear, a novel attention-based inherently interpretable solution, and demonstrate both theoretically and empirically that it is a viable option for (1) generating stable or robust explanations and (2) incorporating human knowledge during the training phase. Source code is available at https://github.com/anselmeamekoe/TabSRA.
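The abstract describes gating a linear scoring model with learned attention weights and ensembling several such models so that per-feature contributions remain directly readable. The minimal PyTorch sketch below illustrates that general idea only; it is not the paper's TabSRALinear implementation. The class names (AttnLinearSketch, EnsembleSketch), the sigmoid attention head, and the hidden_dim default are placeholders chosen here for illustration; refer to the linked repository for the actual TabSRA code.

```python
# Minimal sketch (not the authors' code) of an attention-gated linear model for
# tabular data, in the spirit of the attention-based interpretable model and the
# ensembling discussed in the abstract. All names and design choices below are
# illustrative assumptions; see https://github.com/anselmeamekoe/TabSRA for the
# reference implementation.
import torch
import torch.nn as nn


class AttnLinearSketch(nn.Module):
    """Per-feature attention weights gate the contributions of a linear model."""

    def __init__(self, n_features: int, hidden_dim: int = 16):
        super().__init__()
        # Small network producing one attention weight per input feature.
        self.attention = nn.Sequential(
            nn.Linear(n_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_features),
            nn.Sigmoid(),  # weights constrained to [0, 1]
        )
        # Linear scoring weights and bias (the directly interpretable part).
        self.beta = nn.Parameter(torch.zeros(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def feature_contributions(self, x: torch.Tensor) -> torch.Tensor:
        # Contribution of feature j for a sample: a_j(x) * beta_j * x_j.
        a = self.attention(x)
        return a * self.beta * x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Logit = sum of per-feature contributions + bias.
        return self.feature_contributions(x).sum(dim=-1, keepdim=True) + self.bias


class EnsembleSketch(nn.Module):
    """Average several attention-gated linear models; contributions stay additive."""

    def __init__(self, n_features: int, n_models: int = 5):
        super().__init__()
        self.models = nn.ModuleList(
            [AttnLinearSketch(n_features) for _ in range(n_models)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.stack([m(x) for m in self.models]).mean(dim=0)


if __name__ == "__main__":
    x = torch.randn(8, 10)            # batch of 8 samples with 10 features
    model = EnsembleSketch(n_features=10)
    print(model(x).shape)             # torch.Size([8, 1])
```

Because each ensemble member's logit is a sum of per-feature terms, the ensemble's explanation for a prediction can be obtained by averaging the members' feature contributions, which is the kind of additive structure the abstract relies on for stable explanations.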
Pages: 18583-18611
Number of pages: 28
Related articles (50 in total)
  • [1] Hybrid learning models to get the interpretability–accuracy trade-off in fuzzy modeling
    Rafael Alcalá
    Jesús Alcalá-Fdez
    Jorge Casillas
    Oscar Cordón
    Francisco Herrera
    Soft Computing, 2006, 10 : 717 - 734
  • [2] Attention-based face alignment: A solution to speed/accuracy trade-off
    Wang, Teng
    Tong, Xinjie
    Cai, Wenzhe
    NEUROCOMPUTING, 2020, 400 : 86 - 96
  • [3] Hybrid learning models to get the interpretability-accuracy trade-off in fuzzy modeling
    Alcalá, R
    Alcalá-Fdez, J
    Casillas, J
    Cordón, O
    Herrera, F
    SOFT COMPUTING, 2006, 10 (09) : 717 - 734
  • [4] Exploring the Accuracy-Energy Trade-off in Machine Learning
    Brownlee, Alexander E. I.
    Adair, Jason
    Haraldsson, Saemundur O.
    Jabbo, John
    2021 IEEE/ACM INTERNATIONAL WORKSHOP ON GENETIC IMPROVEMENT (GI 2021), 2021, : 11 - 18
  • [5] Automated Machine Learning for Studying the Trade-Off Between Predictive Accuracy and Interpretability
    Freitas, Alex A.
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, CD-MAKE 2019, 2019, 11713 : 48 - 66
  • [6] The performance-interpretability trade-off: a comparative study of machine learning models
    André Assis
    Jamilson Dantas
    Ermeson Andrade
    Journal of Reliable Intelligent Environments, 2025, 11 (1)
  • [7] Interpretability and accuracy trade-off in the modeling of belief rule-based systems
    You, Yaqian
    Sun, Jianbin
    Guo, Yu
    Tan, Yuejin
    Jiang, Jiang
    KNOWLEDGE-BASED SYSTEMS, 2022, 236
  • [8] Trade-off between accuracy and interpretability for predictive in silico modeling
    Johansson, Ulf
    Sonstrod, Cecilia
    Norinder, Ulf
    Bostrom, Henrik
    FUTURE MEDICINAL CHEMISTRY, 2011, 3 (06) : 647 - 663
  • [9] The accuracy versus interpretability trade-off in fraud detection model
    Nesvijevskaia, Anna
    Ouillade, Sophie
    Guilmin, Pauline
    Zucker, Jean-Daniel
    DATA & POLICY, 2021, 3
  • [10] Attention-based investigation and solution to the trade-off issue of adversarial training
    Shao, Changbin
    Li, Wenbin
    Huo, Jing
    Feng, Zhenhua
    Gao, Yang
    NEURAL NETWORKS, 2024, 174