NeSyFOLD: A Framework for Interpretable Image Classification

Citations: 0
Authors
Padalkar, Parth [1 ]
Wang, Huaduo [1 ]
Gupta, Gopal [1 ]
Institutions
[1] Univ Texas Dallas, Richardson, TX 75080 USA
Keywords
RULES;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning models such as CNNs have surpassed human performance in computer vision tasks such as image classification. Despite their sophistication, however, these models lack interpretability, which can lead to biased outcomes that reflect existing prejudices in the data. We aim to make the predictions of a CNN interpretable. To this end, we present NeSyFOLD, a novel framework for creating a neurosymbolic (NeSy) model for image classification tasks. In this model, all layers following the last convolutional layer of a CNN are replaced by a stratified answer set program (ASP) derived from the last-layer kernels. The answer set program can be viewed as a rule-set in which the truth value of each predicate depends on the activation of the corresponding kernel in the CNN. The rule-set serves as a global, interpretable explanation for the model. We also apply the NeSyFOLD framework to a CNN trained with a sparse kernel learning technique called Elite BackProp (EBP). This yields a significant reduction in rule-set size without compromising accuracy or fidelity, improving both the scalability of the NeSy model and the interpretability of its rule-set. We evaluate on datasets of varied complexity and size. We also propose a novel algorithm for labeling the predicates in the rule-set with meaningful semantic concept(s) learned by the CNN, and we evaluate this "semantic labeling algorithm" to quantify the efficacy of the semantic labeling for both the NeSy model and the NeSy-EBP model.
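The prediction pipeline the abstract describes can be sketched in a few lines: binarize each last-layer kernel's activation into a predicate's truth value, then evaluate a rule-set over those predicates. This is a minimal illustration only, not the NeSyFOLD implementation; the thresholds, kernel labels, and rules shown here are hypothetical.

```python
# Minimal sketch of the NeSy evaluation idea: each last-layer kernel's
# pooled activation is binarized into a predicate's truth value, and an
# ordered rule-set over those predicates yields the class prediction.
# Thresholds, kernel meanings, and rules below are illustrative only.

def binarize(activations, thresholds):
    """Truth value of each kernel-predicate: active iff its pooled
    activation meets its cutoff (in NeSyFOLD the cutoffs are derived
    from the training-set activation distribution)."""
    return [a >= t for a, t in zip(activations, thresholds)]

def classify(truths, rules):
    """Evaluate rules in order; a rule (label, pos, neg) fires when all
    kernels in `pos` are active and none in `neg` are. This ordered
    evaluation stands in for the stratified ASP's defaults/exceptions."""
    for label, pos, neg in rules:
        if all(truths[i] for i in pos) and not any(truths[j] for j in neg):
            return label
    return None

# Hypothetical two-kernel example: kernel 0 ~ "bathtub", kernel 1 ~ "bed".
truths = binarize([0.9, 0.1], [0.5, 0.5])
rules = [("bathroom", [0], [1]),   # bathroom :- bathtub, not bed.
         ("bedroom",  [1], [])]    # bedroom  :- bed.
print(classify(truths, rules))     # -> bathroom
```

A semantic labeling step (as proposed in the paper) would replace the numeric kernel indices with concept names, so the fired rule reads as a human-interpretable explanation.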
Pages: 4378-4387
Page count: 10
Related Papers (50 total)
  • [21] An Interpretable Classification Framework for Information Extraction from Online Healthcare Forums
    Gao, Jun
    Liu, Ninghao
    Lawley, Mark
    Hu, Xia
    JOURNAL OF HEALTHCARE ENGINEERING, 2017, 2017
  • [22] MLIC: A MaxSAT-Based Framework for Learning Interpretable Classification Rules
    Malioutov, Dmitry
    Meel, Kuldeep S.
    PRINCIPLES AND PRACTICE OF CONSTRAINT PROGRAMMING, 2018, 11008 : 312 - 327
  • [23] An Interpretable Human-in-the-Loop Process to Improve Medical Image Classification
    Santos, Joana Cristo
    Santos, Miriam Seoane
    Abreu, Pedro Henriques
    ADVANCES IN INTELLIGENT DATA ANALYSIS XXII, PT I, IDA 2024, 2024, 14641 : 179 - 190
  • [24] Interpretable Medical Image Classification Using Prototype Learning and Privileged Information
    Gallee, Luisa
    Beer, Meinrad
    Goetz, Michael
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT II, 2023, 14221 : 435 - 445
  • [25] Enhanced TabNet: Attentive Interpretable Tabular Learning for Hyperspectral Image Classification
    Shah, Chiranjibi
    Du, Qian
    Xu, Yan
    REMOTE SENSING, 2022, 14 (03)
  • [26] ProtoPShare: Prototypical Parts Sharing for Similarity Discovery in Interpretable Image Classification
    Rymarczyk, Dawid
    Struski, Lukasz
    Tabor, Jacek
    Zielinski, Bartosz
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1420 - 1430
  • [27] An Interpretable Image Denoising Framework via Dual Disentangled Representation Learning
    Liang, Yunji
    Fan, Jiayuan
    Zheng, Xiaolong
    Wang, Yutong
    Huangfu, Luwen
    Ghavate, Vedant
    Yu, Zhiwen
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): : 2016 - 2030
  • [28] Interpretable Convolutional Neural Network Including Attribute Estimation for Image Classification
    Horii, Kazaha
    Maeda, Keisuke
    Ogawa, Takahiro
    Haseyama, Miki
    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS, 2020, 8 (02): : 111 - 124
  • [29] Interpretable Deep Image Classification Using Rationally Inattentive Utility Maximization
    Pattanayak, Kunal
    Krishnamurthy, Vikram
    Jain, Adit
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (02) : 168 - 183
  • [30] A self-interpretable module for deep image classification on small data
    La Rosa, Biagio
    Capobianco, Roberto
    Nardi, Daniele
    APPLIED INTELLIGENCE, 2023, 53 : 9115 - 9147