Machine Learning Explainability Through Comprehensible Decision Trees

Cited by: 31
Authors
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Affiliations
[1] Univ Rovira & Virgili, CYBERCAT Ctr Cybersecur Res Catalonia, Dept Comp Sci & Math, UNESCO Chair Data Privacy, Av Paisos Catalans 26, Tarragona 43007, Catalonia, Spain
Funding
EU Horizon 2020;
Keywords
Explainability; Machine learning; Data protection; Microaggregation; Privacy;
DOI
10.1007/978-3-030-29726-8_2
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation establishes that citizens have the right to receive an explanation on automated decisions affecting them. For explainability to be scalable, it should be possible to derive explanations in an automated way. A common approach is to use simpler, more intuitive decision algorithms to build a surrogate model of the black-box model (for example a deep learning algorithm) used to make a decision. Yet, there is a risk that the surrogate model is too large for it to be really comprehensible to humans. We focus on explaining black-box models by using decision trees of limited size as a surrogate model. Specifically, we propose an approach based on microaggregation to achieve a trade-off between comprehensibility and representativeness of the surrogate model on the one side and privacy of the subjects used for training the black-box model on the other side.
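The surrogate-model pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: plain k-means stands in for microaggregation (a microaggregation heuristic such as MDAV would additionally enforce a minimum cluster size k, which is what yields the privacy guarantee), a random forest plays the role of the black box, and all names and parameters are illustrative.

```python
# Sketch: local surrogate decision trees of limited size, fit per cluster
# to the predictions of a black-box model. Clustering via k-means is an
# assumption standing in for microaggregation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=0)

# 1. Black-box model whose automated decisions need explaining.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Partition the instances into clusters (microaggregation would also
#    enforce a minimum cluster size k for the subjects' privacy).
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# 3. Per cluster, fit a shallow tree on the black box's predictions;
#    the depth limit keeps each explanation comprehensible to humans.
surrogates = {}
for c in np.unique(clusters):
    Xc = X[clusters == c]
    yc = black_box.predict(Xc)
    surrogates[c] = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xc, yc)

# Fidelity: how often each local surrogate agrees with the black box,
# a common proxy for the representativeness of the surrogate model.
fidelity = float(np.mean([
    (surrogates[c].predict(X[clusters == c])
     == black_box.predict(X[clusters == c])).mean()
    for c in np.unique(clusters)
]))
print(f"mean local fidelity: {fidelity:.2f}")
```

Tightening `max_depth` (or growing the clusters) trades fidelity for comprehensibility, which is the trade-off the abstract refers to.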
Pages: 15-26
Page count: 12