Evaluating Quality of Visual Explanations of Deep Learning Models for Vision Tasks

Cited by: 0
Authors
Yang, Yuqing [1 ,2 ]
Mahmoudpour, Saeed [1 ,2 ]
Schelkens, Peter [1 ,2 ]
Deligiannis, Nikos [1 ,2 ]
Affiliations
[1] Vrije Univ Brussel, Dept Elect & Informat, Pl Laan 2, B-1050 Brussels, Belgium
[2] imec, Kapeldreef 75, B-3001 Leuven, Belgium
Keywords
Explainable artificial intelligence; Vision Transformer; heatmaps; subjective evaluation
DOI
10.1109/QOMEX58391.2023.10178510
CLC Classification Number
TP39 [Applications of Computers]
Subject Classification Codes
081203; 0835
Abstract
Explainable artificial intelligence (XAI) has gained considerable attention in recent years, as it aims to help humans better understand machine learning decisions and make complex black-box systems more trustworthy. Visual explanation algorithms have been designed to generate heatmaps highlighting the image regions that a deep neural network focuses on when making decisions. While convolutional neural network (CNN) models typically follow similar processing operations for feature encoding, the emergence of the vision transformer (ViT) has introduced a new approach to machine vision decision-making. An important question is therefore which architecture provides more human-understandable explanations. This paper examines the explainability of deep architectures, including CNN and ViT models, under different vision tasks. To this end, we first conducted a subjective experiment asking humans to highlight the key visual features in images that helped them make decisions in two different vision tasks. Next, using the human-annotated images, ground-truth heatmaps were generated and compared against the heatmaps produced by explanation methods for the deep architectures. Moreover, perturbation tests were performed for an objective evaluation of the deep models' explanation heatmaps. According to the results, the explanations generated from ViT are deemed more trustworthy than those produced by the CNNs, and this advantage becomes more evident as the salient features of the input image are more dispersed.
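The abstract describes two evaluation ideas: agreement between an explanation heatmap and a human-annotated ground-truth map, and a perturbation test of faithfulness. The Python sketch below illustrates both ideas under stated assumptions; it is not the authors' exact protocol. The similarity metrics (Pearson correlation and thresholded IoU), the deletion-style perturbation procedure, and the model_confidence callable are illustrative choices introduced here, not details taken from the paper.

# Minimal sketch (not the authors' exact protocol) of heatmap agreement and a
# deletion-style perturbation test. Metric choices, thresholds, and the
# `model_confidence` callable are illustrative assumptions.
import numpy as np

def heatmap_similarity(gt: np.ndarray, pred: np.ndarray, thresh: float = 0.5):
    """Compare a ground-truth heatmap with a model explanation heatmap.

    Both maps are assumed to have the same spatial size and values in [0, 1].
    Returns Pearson correlation and an IoU over thresholded salient regions.
    """
    gt_f, pred_f = gt.ravel(), pred.ravel()
    pearson = np.corrcoef(gt_f, pred_f)[0, 1]
    gt_mask, pred_mask = gt_f >= thresh, pred_f >= thresh
    union = np.logical_or(gt_mask, pred_mask).sum()
    iou = np.logical_and(gt_mask, pred_mask).sum() / union if union else 0.0
    return pearson, iou

def deletion_test(image: np.ndarray, heatmap: np.ndarray,
                  model_confidence, steps: int = 10):
    """Progressively mask the most salient pixels and track model confidence.

    `model_confidence(img)` is assumed to return the probability of the
    originally predicted class. A confidence curve that drops faster suggests
    a more faithful explanation; the mean over the curve is a simple summary.
    """
    order = np.argsort(heatmap.ravel())[::-1]   # most salient pixels first
    per_step = max(1, len(order) // steps)
    img = image.copy()
    scores = [model_confidence(img)]
    for s in range(steps):
        idx = order[s * per_step:(s + 1) * per_step]
        ys, xs = np.unravel_index(idx, heatmap.shape)
        img[ys, xs] = 0.0                       # remove the salient region
        scores.append(model_confidence(img))
    return float(np.mean(scores))               # lower = more faithful

In practice, such scores would be averaged over a test set and compared between the CNN and ViT explanation methods; the hypothetical model_confidence function would wrap a forward pass of the trained classifier.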
Pages: 159-164
Page count: 6
Related Papers
50 items in total (items 41-50 shown)
  • [41] HIVE: Evaluating the Human Interpretability of Visual Explanations
    Kim, Sunnie S. Y.
    Meister, Nicole
    Ramaswamy, Vikram V.
    Fong, Ruth
    Russakovsky, Olga
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672 : 280 - 298
  • [42] Explaining Deep Learning Models for Low Vision Prognosis
    Gui, Haiwen
    Tseng, Benjamin
    Hu, Wendeng
    Wang, Sophia Y.
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2022, 63 (07)
  • [43] Evaluating Vision-Language Models in Visual Comprehension for Autonomous Driving
    Zhou, Shanmin
    Li, Jialong
    Yamauchi, Takuto
    Cai, Jinyu
    Tei, Kenji
    2024 IEEE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE, SEAI 2024, 2024, : 205 - 209
  • [44] Active Vision for Deep Visual Learning: A Unified Pooling Framework
    Guo, Nan
    Gu, Ke
    Qiao, Junfei
    Liu, Hantao
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10) : 6610 - 6618
  • [45] Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks
    Sammani, Fawaz
    Deligiannis, Nikos
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 4636 - 4641
  • [46] Improving and evaluating deep learning models of cellular organization
    Sun, Huangqingbo
    Fu, Xuecong
    Abraham, Serena
    Shen, Jin
    Murphy, Robert F.
    BIOINFORMATICS, 2022, 38 (23) : 5299 - 5306
  • [47] Research progress of computer vision tasks based on deep learning and SAE network
    Ling, Shijia
    Yi, Qiaoling
    Lan, Banru
    Liu, Liangfang
    APPLIED MATHEMATICS AND NONLINEAR SCIENCES, 2023, 8 (02) : 985 - 994
  • [48] Creating visual explanations improves learning
    Bobek E.
    Tversky B.
    Cognitive Research: Principles and Implications, 1 (1)
  • [49] Evaluating the Quality of Serial EM Sections with Deep Learning
    Tavakoli, Mahsa Bank
    Morgan, Josh L.
    MICROSCOPY AND MICROANALYSIS, 2024, : 501 - 507
  • [50] A Deep Learning Algorithm for Evaluating the Quality of English Teaching
    Li, Nan
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS, 2023, 22 (03)