Toward Efficient Automation of Interpretable Machine Learning

Citations: 0
Authors
Kovalerchuk, Boris [1 ]
Neuhaus, Nathan [1 ]
Affiliation
[1] Cent Washington Univ, Dept Comp Sci, Ellensburg, WA 98926 USA
Keywords
machine learning; explainability; interpretability; accuracy; classifier; visualization; visual model; dominant intervals
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Developing more efficient automated methods for interpretable machine learning (ML) is an important long-term goal. Recent studies show that unintelligible "black-box" models, such as deep learning neural networks, often outperform more interpretable "grey-box" or "white-box" models such as decision trees, Bayesian networks, logic-relational models, and others. Being forced to choose between accuracy and interpretability, however, is a major obstacle to the wider adoption of ML in healthcare and other domains where decisions require both. Because of human perceptual limitations in analyzing complex multidimensional relations, complex ML models must be "degraded" to the level of human understanding, which also degrades model accuracy. To address this challenge, this paper presents the Dominance Classifier and Predictor (DCP) algorithm, which automates the discovery of human-understandable ML models that are simple and visualizable. The success of DCP is demonstrated on the benchmark Wisconsin Breast Cancer dataset, where it achieves higher accuracy than previously reported for other interpretable methods on these data. Furthermore, the DCP algorithm narrows the accuracy gap between interpretable and non-interpretable models on these data. The DCP explanation includes both interpretable mathematical and visual forms. This approach opens a new opportunity for producing more accurate and domain-explainable ML models.
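The abstract does not spell out DCP's internals, but the "dominant intervals" idea it names can be illustrated with a minimal sketch. Everything below (function names, the binning scheme, the purity threshold, the majority vote) is an illustrative assumption for intuition, not the paper's actual algorithm: per feature, find value intervals in which one class dominates, then classify a new sample by voting over the intervals it falls into.

```python
# Illustrative sketch of interval-dominance classification.
# NOT the paper's DCP algorithm; all details here are assumptions.
import numpy as np

def dominant_intervals(x, y, n_bins=10, purity=0.9):
    """Find bins of one feature dominated by a single class.

    Returns (lo, hi, label) triples for bins where one class
    holds at least `purity` of the samples falling in the bin.
    """
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    intervals = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.sum() == 0:
            continue  # empty bin: no rule
        labels, counts = np.unique(y[mask], return_counts=True)
        if counts.max() / counts.sum() >= purity:
            intervals.append((lo, hi, labels[counts.argmax()]))
    return intervals

def predict(sample, rules):
    """Majority vote over per-feature dominant intervals.

    `rules` maps a feature index to that feature's interval list;
    returns None when the sample hits no dominant interval.
    """
    votes = []
    for j, feature_rules in rules.items():
        for lo, hi, label in feature_rules:
            if lo <= sample[j] <= hi:
                votes.append(label)
                break  # at most one vote per feature
    if not votes:
        return None
    vals, counts = np.unique(votes, return_counts=True)
    return vals[counts.argmax()]
```

Such interval rules stay interpretable because each one reads as a plain statement ("if feature j is between lo and hi, vote for class c") and can be drawn directly on a per-feature axis, which matches the visualizable-model goal the abstract describes.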
Pages: 4940-4947 (8 pages)