NeSyFOLD: A Framework for Interpretable Image Classification

Cited: 0
Authors
Padalkar, Parth [1 ]
Wang, Huaduo [1 ]
Gupta, Gopal [1 ]
Affiliations
[1] Univ Texas Dallas, Richardson, TX 75080 USA
Keywords
RULES;
DOI
not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning models such as CNNs have surpassed human performance in computer vision tasks such as image classification. However, despite their sophistication, these models lack interpretability, which can lead to biased outcomes reflecting existing prejudices in the data. We aim to make the predictions made by a CNN interpretable. Hence, we present a novel framework called NeSyFOLD to create a neurosymbolic (NeSy) model for image classification tasks. The model is a CNN in which all layers following the last convolutional layer are replaced by a stratified answer set program (ASP) derived from the last-layer kernels. The answer set program can be viewed as a rule-set, wherein the truth value of each predicate depends on the activation of the corresponding kernel in the CNN. The rule-set serves as a global explanation for the model and is interpretable. We also use our NeSyFOLD framework with a CNN that is trained using a sparse kernel learning technique called Elite BackProp (EBP). This leads to a significant reduction in rule-set size without compromising accuracy or fidelity, thus improving the scalability of the NeSy model and the interpretability of its rule-set. Evaluation is done on datasets of varied complexity and size. We also propose a novel algorithm for labeling the predicates in the rule-set with meaningful semantic concept(s) learnt by the CNN. We evaluate the performance of our "semantic labeling algorithm" to quantify the efficacy of the semantic labeling for both the NeSy model and the NeSy-EBP model.
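The core mechanism described in the abstract (each predicate's truth value is determined by whether the corresponding last-layer kernel is sufficiently activated, and the rule-set then replaces the CNN's final layers) can be sketched as follows. This is a minimal toy illustration, not NeSyFOLD's actual algorithm: the thresholds, kernel names, class labels, and hand-written rules are all invented for this sketch, and the real framework learns its stratified ASP rule-set from data.

```python
import numpy as np

def binarize_kernels(feature_maps, thresholds):
    """Map each kernel's feature map to a truth value: True if its mean
    activation (its "norm") exceeds that kernel's threshold.
    feature_maps: array of shape (num_kernels, H, W); thresholds are
    hypothetical per-kernel cutoffs for this sketch."""
    norms = feature_maps.mean(axis=(1, 2))  # one scalar per kernel
    return {f"kernel_{i}": bool(n > t)
            for i, (n, t) in enumerate(zip(norms, thresholds))}

def classify(atoms):
    """Toy stand-in for the stratified rule-set: a class fires when its
    rule body, a conjunction of (possibly negated) kernel predicates,
    is satisfied. Rules and labels here are invented."""
    if atoms["kernel_0"] and not atoms["kernel_2"]:
        return "bathroom"
    if atoms["kernel_1"]:
        return "bedroom"
    return "unknown"

# Fake "last convolutional layer" output: 3 kernels, each a 4x4 feature map.
rng = np.random.default_rng(0)
maps = rng.random((3, 4, 4))
atoms = binarize_kernels(maps, thresholds=[0.4, 0.9, 0.9])
print(atoms, "->", classify(atoms))
```

The point of the sketch is the division of labor: the CNN supplies the binarized kernel activations (the atoms), and a symbolic rule-set, rather than fully connected layers, maps those atoms to a class, so the decision path for any image can be read off as satisfied rules.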
Pages: 4378-4387 (10 pages)
Related papers
50 in total
  • [1] MetaCluster: A Universal Interpretable Classification Framework for Cybersecurity
    Ge, Wenhan
    Cui, Zeyuan
    Wang, Junfeng
    Tang, Binhui
    Li, Xiaohui
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3829 - 3843
  • [2] Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification
    Bennetot, Adrien
    Franchi, Gianni
    Del Ser, Javier
    Chatila, Raja
    Diaz-Rodriguez, Natalia
    KNOWLEDGE-BASED SYSTEMS, 2022, 258
  • [3] An Adaptive and Interpretable Framework for Biomedical Image Analysis
    Singh, Samarth
    Acton, Scott T.
    Moosa, Shayan
    Sheybani, Natasha D.
    FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, IEEECONF, 2023, : 1156 - 1160
  • [4] Interpretable Image Classification with Differentiable Prototypes Assignment
    Rymarczyk, Dawid
    Struski, Lukasz
    Gorszczak, Michal
    Lewandowska, Koryna
    Tabor, Jacek
    Zielinski, Bartosz
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 351 - 368
  • [5] Comparative Study of Interpretable Image Classification Models
    Bajcsi, Adel
    Bajcsi, Anna
    Pavel, Szabolcs
    Portik, Abel
    Sandor, Csanad
    Szenkovits, Annamaria
    Vas, Orsolya
    Bodo, Zalan
    Csato, Lehel
    INFOCOMMUNICATIONS JOURNAL, 2023, 15 : 20 - 26
  • [6] INTERPRETABLE AESTHETIC FEATURES FOR AFFECTIVE IMAGE CLASSIFICATION
    Wang, Xiaohui
    Jia, Jia
    Yin, Jiaming
    Cai, Lianhong
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 3230 - 3234
  • [7] A framework for image classification
    Awad, Mamoun
    Wang, Lei
    Chin, Yuhan
    Khan, Latifur
    Chen, George
    Chebil, Fehmi
    7TH IEEE SOUTHWEST SYMPOSIUM ON IMAGE ANALYSIS AND INTERPRETATION, 2006, : 134 - +
  • [8] MultiCapsNet: A General Framework for Data Integration and Interpretable Classification
    Wang, Lifei
    Miao, Xuexia
    Nie, Rui
    Zhang, Zhang
    Zhang, Jiang
    Cai, Jun
    FRONTIERS IN GENETICS, 2021, 12
  • [10] A Bayesian Framework for Learning Rule Sets for Interpretable Classification
    Wang, Tong
    Rudin, Cynthia
    Doshi-Velez, Finale
    Liu, Yimin
    Klampfl, Erica
    MacNeille, Perry
    JOURNAL OF MACHINE LEARNING RESEARCH, 2017, 18 : 1 - 37