Can metafeatures help improve explanations of prediction models when using behavioral and textual data?

Cited by: 6
Authors
Ramon, Yanou [1 ]
Martens, David [1 ]
Evgeniou, Theodoros [2 ]
Praet, Stiene [1 ]
Affiliations
[1] Univ Antwerp, Dept Engn, Antwerp, Belgium
[2] INSEAD, Decis Sci & Technol Management, Fontainebleau, France
Funding
Research Foundation - Flanders (FWO), Belgium
Keywords
Explainable artificial intelligence; Interpretable machine learning; Metafeatures; Comprehensibility; Global explanations; Rule-extraction; Classification; Big behavioral data; Textual data; SUPPORT VECTOR MACHINES; RULE EXTRACTION; CHURN PREDICTION; TASK COMPLEXITY; CLASSIFICATION; SPARSE; BIG;
DOI
10.1007/s10994-021-05981-0
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Machine learning models built on behavioral and textual data can be highly accurate, but they are often very difficult to interpret. Interpreting a linear model requires inspecting thousands of coefficients, and nonlinear models are more opaque still. Rule-extraction techniques have been proposed to combine the desired predictive accuracy of complex "black-box" models with global explainability. However, rule-extraction is challenging for high-dimensional, sparse data, where many features are relevant to the predictions: replacing the black-box model with a large number of rules leaves the user, once again, with an incomprehensible explanation. To address this problem, we develop and test a rule-extraction methodology based on higher-level, less-sparse "metafeatures". We empirically validate the quality of the explanation rules in terms of fidelity, stability, and accuracy over a collection of data sets, and benchmark their performance against rules extracted using the fine-grained behavioral and textual features. A key finding of our analysis is that metafeature-based explanations are better at mimicking the behavior of the black-box prediction model, as measured by the fidelity of the explanations.
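To make the methodology concrete, the sketch below is a minimal, hypothetical illustration and not the authors' implementation: the synthetic data, the logistic-regression black box, the use of NMF to derive metafeatures, the shallow surrogate tree, and every parameter value are assumptions chosen for brevity. The surrogate is trained on the metafeature representation but labeled with the black-box model's predictions, so its rules explain the model rather than the raw data, and fidelity is measured as the agreement between the two.

```python
# Minimal, hypothetical sketch of metafeature-based rule extraction (illustration only):
# synthetic sparse data, a logistic-regression "black box", NMF-derived metafeatures,
# and a shallow decision tree as the extracted rule set are all assumptions.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for high-dimensional, sparse behavioral/textual data:
# 2000 instances, 5000 binary features, ~0.5% density.
X = sparse_random(2000, 5000, density=0.005, format="csr", random_state=0)
X.data[:] = 1.0
# Synthetic target driven by a small group of features.
y = (np.asarray(X[:, :100].sum(axis=1)).ravel() > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Black-box" model trained on the fine-grained features.
black_box = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Metafeatures: a lower-dimensional, less-sparse representation (here via NMF).
nmf = NMF(n_components=20, init="nndsvd", max_iter=500, random_state=0).fit(X_tr)
M_tr, M_te = nmf.transform(X_tr), nmf.transform(X_te)

# Extracted rules: a shallow surrogate tree trained to mimic the black-box
# predictions, expressed in terms of the metafeatures.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(M_tr, black_box.predict(X_tr))

# Fidelity = agreement with the black box; accuracy = agreement with true labels.
fidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(M_te))
accuracy = accuracy_score(y_te, surrogate.predict(M_te))
print(f"fidelity={fidelity:.3f}  accuracy={accuracy:.3f}")
print(export_text(surrogate, feature_names=[f"metafeature_{i}" for i in range(20)]))
```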
Pages: 4245-4284
Number of pages: 40