Machine learning explainability via microaggregation and shallow decision trees

Cited by: 37
Authors
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Martinez, Sergio [1 ]
Sanchez, David [1 ]
Affiliations
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, Tarragona 43007, Catalonia, Spain
Funding
EU Horizon 2020
Keywords
Explainability; Machine learning; Data protection; Microaggregation; Privacy;
DOI
10.1016/j.knosys.2020.105532
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian society, automated decisions should be explainable. This is not only a right of citizens, enshrined for example in the European General Data Protection Regulation, but also a desirable goal for engineers, who want to know whether their decision algorithms capture the relevant features. For explainability to be scalable, it should be possible to derive explanations in a systematic way. A common approach is to use a simpler, more intuitive decision algorithm to build a surrogate model of the black-box model (for example, a deep learning model) used to make a decision. Yet there is a risk that the surrogate model is too large to be really comprehensible to humans. We focus on explaining black-box models with decision trees of limited depth as surrogate models. Specifically, we propose a microaggregation-based approach that trades off the comprehensibility and representativeness of the surrogate model on the one hand against the privacy of the subjects used to train the black-box model on the other. (C) 2020 Elsevier B.V. All rights reserved.
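The surrogate-model idea described in the abstract can be illustrated with a minimal sketch: partition the training data into clusters, then fit one shallow decision tree per cluster on the black-box model's predictions, so each tree is a small, locally representative explanation. This is only an illustration under assumed choices, not the paper's actual method: k-means stands in for a proper microaggregation algorithm (such as MDAV, which additionally enforces a minimum cluster size k for privacy), a random forest stands in for the black box, and all names and parameters here are illustrative.

```python
# Illustrative sketch (not the paper's algorithm): cluster the data, then
# train a shallow per-cluster decision tree that mimics a black-box model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))            # synthetic training data
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

# Stand-in for the black-box model to be explained.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Stand-in for microaggregation: plain k-means clustering. A real
# microaggregation method would also guarantee a minimum cluster size.
n_clusters = 5
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# One shallow (depth-limited) surrogate tree per cluster, trained to
# reproduce the black box's decisions on that cluster's records.
surrogates = {}
for c in range(n_clusters):
    idx = labels == c
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X[idx], black_box.predict(X[idx]))
    surrogates[c] = tree

# Fidelity: fraction of records where the surrogate matches the black box.
preds = np.concatenate([surrogates[c].predict(X[labels == c]) for c in range(n_clusters)])
truth = np.concatenate([black_box.predict(X[labels == c]) for c in range(n_clusters)])
fidelity = float((preds == truth).mean())
print(fidelity)
```

The depth limit (`max_depth=3`) is what keeps each explanation humanly comprehensible; in the paper's framing, the cluster size controls the trade-off between how representative each tree is and how much privacy the training subjects retain.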
Pages: 14