GrMC: Towards Interpretable Classification Models That Are Also Accurate

Cited: 0
Authors
Dong, Guozhu [1 ]
Skapura, Nicholas [1 ]
Affiliations
[1] Wright State Univ, Dayton, OH 45435 USA
Keywords
classification model; model type; interpretability; accuracy; instance group; group definition; group model; small; committee of group models; heterogeneity;
DOI
10.1109/ICKG59574.2023.00031
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Interpretability of classification models is an important issue, yet there is a lack of intrinsically interpretable models that are also highly accurate. To fill this gap, we introduce a new classification model type, the Group Model Committee (GrMC), together with an associated learning algorithm. Our key ideas are: (1) divide a classification task's data space into several groups, each defined by a simple condition and having its own interpretable group model; (2) a data instance belongs to every group whose defining condition it satisfies, and it is classified by combining the group models of those groups. Experiments show that small interpretable GrMC models are often more accurate than existing intrinsically interpretable models, and also more accurate than Random Forest models. GrMC also has other strengths.
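The abstract does not specify how group conditions are learned or how the committee combines group predictions, so the following is only a minimal sketch of the stated idea, assuming hypothetical single-feature threshold conditions as group definitions, shallow decision trees as the interpretable group models, and probability averaging over the groups an instance belongs to as the combination rule; none of these choices are the paper's actual algorithm.

```python
# Minimal sketch of the GrMC idea described in the abstract (illustrative only).
# Assumptions: groups are defined by simple single-feature threshold conditions,
# each group model is a depth-2 decision tree, and an instance's prediction
# averages the class probabilities of all groups whose conditions it satisfies.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
n_classes = len(np.unique(y))

# Hypothetical group definitions: (feature index, threshold, side of threshold).
group_defs = [
    (2, 2.5, "le"),   # petal length <= 2.5
    (2, 2.5, "gt"),   # petal length >  2.5
    (3, 1.7, "le"),   # petal width  <= 1.7
    (3, 1.7, "gt"),   # petal width  >  1.7
]

def in_group(data, gdef):
    """Return a boolean mask of instances satisfying the group's condition."""
    f, t, side = gdef
    return data[:, f] <= t if side == "le" else data[:, f] > t

# Fit one small interpretable model per group on the instances the group covers.
group_models = []
for gdef in group_defs:
    mask = in_group(X, gdef)
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[mask], y[mask])
    group_models.append(model)

def predict(X_new):
    """Classify by averaging probabilities from the models of all covering groups."""
    votes = np.zeros((len(X_new), n_classes))
    counts = np.zeros(len(X_new))
    for gdef, model in zip(group_defs, group_models):
        mask = in_group(X_new, gdef)
        if mask.any():
            proba = np.zeros((mask.sum(), n_classes))
            proba[:, model.classes_] = model.predict_proba(X_new[mask])
            votes[mask] += proba
            counts[mask] += 1
    return np.argmax(votes / np.maximum(counts, 1)[:, None], axis=1)

print("training accuracy:", np.mean(predict(X) == y))
```

Each group model here can be read on its own (a depth-2 tree over the instances matching one simple condition), which is the interpretability property the abstract emphasizes; the committee step only aggregates the few models whose conditions an instance satisfies.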
Pages: 209-218
Number of pages: 10