Learning outside the Black-Box: The pursuit of interpretable models

Cited by: 0
Authors
Crabbe, Jonathan [1 ]
Zhang, Yao [1 ]
Zame, William R. [2 ]
van der Schaar, Mihaela [1 ]
Affiliations
[1] Univ Cambridge, Cambridge CB2 1TN, England
[2] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Keywords
REPRESENTATION; REGRESSION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine Learning has proved its ability to produce accurate models, but the deployment of these models outside the machine learning community has been hindered by the difficulties of interpreting these models. This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function. Our algorithm employs a variation of projection pursuit in which the ridge functions are chosen to be Meijer G-functions, rather than the usual polynomial splines. Because Meijer G-functions are differentiable in their parameters, we can "tune" the parameters of the representation by gradient descent; as a consequence, our algorithm is efficient. Using five familiar data sets from the UCI repository and two familiar machine learning algorithms, we demonstrate that our algorithm produces global interpretations that are both highly accurate and parsimonious (involving a small number of terms). Our interpretations permit easy understanding of the relative importance of features and feature interactions. Our interpretation algorithm represents a leap forward from the previous state of the art.
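As a rough illustration of the idea described in the abstract, the sketch below fits a surrogate of the form f(x) ≈ Σ_k g_k(w_k · x) to a black-box's outputs by gradient descent. It is not the authors' implementation: the paper uses Meijer G-functions as the ridge functions g_k, whereas here a simple differentiable rational family stands in for them, and the black_box function, the ridge family, and all hyperparameters are assumptions made purely for illustration (PyTorch is used for automatic differentiation).

import torch

torch.manual_seed(0)

d, K, n = 5, 3, 1024   # input dimension, number of ridge terms, sample size

# Hypothetical black box standing in for a trained model to be interpreted.
def black_box(x):
    return torch.sin(x[:, 0]) + x[:, 1] * x[:, 2]

X = torch.randn(n, d)
y = black_box(X)

# Learnable projections w_k and ridge-function parameters.
# NOTE: the paper uses Meijer G-functions; this rational family
# g(z) = P(z) / (1 + Q(z)^2) is only an illustrative stand-in.
W = torch.randn(K, d, requires_grad=True)
A = torch.randn(K, 3, requires_grad=True)   # coefficients of P
B = torch.randn(K, 3, requires_grad=True)   # coefficients of Q

def ridge(z, a, b):
    powers = torch.stack([torch.ones_like(z), z, z ** 2], dim=-1)  # (n, 3)
    return (powers * a).sum(-1) / (1.0 + (powers * b).sum(-1) ** 2)

def surrogate(x):
    z = x @ W.T                                  # (n, K) projected inputs
    return sum(ridge(z[:, k], A[k], B[k]) for k in range(K))

opt = torch.optim.Adam([W, A, B], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((surrogate(X) - y) ** 2)   # match the black box's outputs
    loss.backward()
    opt.step()

print(f"surrogate MSE after fitting: {loss.item():.4f}")

On a real model one would replace black_box with the trained predictor's forward pass and, as the abstract suggests, inspect the recovered projections w_k and ridge terms g_k to read off the relative importance of features and their interactions.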
Pages: 12
Related Papers
50 records in total
  • [31] Black-Box Model Explained Through an Assessment of Its Interpretable Features
    Ventura, Francesco
    Cerquitelli, Tania
    Giacalone, Francesco
    NEW TRENDS IN DATABASES AND INFORMATION SYSTEMS, ADBIS 2018, 2018, 909 : 138 - 149
  • [32] Interpretable, not black-box, artificial intelligence should be used for embryo selection
    Afnan, Michael Anis Mihdi
    Liu, Yanhe
    Conitzer, Vincent
    Rudin, Cynthia
    Mishra, Abhishek
    Savulescu, Julian
    Afnan, Masoud
    HUMAN REPRODUCTION OPEN, 2021, 2021 (04)
  • [33] Comparing Explanations from Glass-Box and Black-Box Machine-Learning Models
    Kuk, Michal
    Bobek, Szymon
    Nalepa, Grzegorz J.
    COMPUTATIONAL SCIENCE - ICCS 2022, PT III, 2022, 13352 : 668 - 675
  • [34] OneMax in Black-Box Models with Several Restrictions
    Carola Doerr
    Johannes Lengler
    Algorithmica, 2017, 78 : 610 - 640
  • [35] ONEMAX in Black-Box Models with Several Restrictions
    Doerr, Carola
    Lengler, Johannes
    ALGORITHMICA, 2017, 78 (02) : 610 - 640
  • [36] Testing Framework for Black-box AI Models
    Aggarwal, Aniya
    Shaikh, Samiulla
    Hans, Sandeep
    Haldar, Swastik
    Ananthanarayanan, Rema
    Saha, Diptikalyan
    2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: COMPANION PROCEEDINGS (ICSE-COMPANION 2021), 2021, : 81 - 84
  • [37] Auditing black-box models for indirect influence
    Adler, Philip
    Falk, Casey
    Friedler, Sorelle A.
    Nix, Tionney
    Rybeck, Gabriel
    Scheidegger, Carlos
    Smith, Brandon
    Venkatasubramanian, Suresh
    KNOWLEDGE AND INFORMATION SYSTEMS, 2018, 54 (01) : 95 - 122
  • [38] Demystifying Black-box Models with Symbolic Metamodels
    Alaa, Ahmed M.
    van der Schaar, Mihaela
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [39] Auditing Black-box Models for Indirect Influence
    Adler, Philip
    Falk, Casey
    Friedler, Sorelle A.
    Rybeck, Gabriel
    Scheidegger, Carlos
    Smith, Brandon
    Venkatasubramanian, Suresh
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016, : 1 - 10
  • [40] BLACK-BOX MODELS FOR LINEAR INTEGRATED CIRCUITS
    Murray-Lasso, M. A.
    IEEE TRANSACTIONS ON EDUCATION, 1969, E-12 (03) : 170