Leveraging saliency priors and explanations for enhanced consistent interpretability

Cited by: 2
Authors
Dong, Liang
Chen, Leiyang
Fu, Zhongwang
Zheng, Chengliang
Cui, Xiaohui [1]
Shen, Zhidong
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430000, Peoples R China
Keywords
Explainable artificial intelligence; Consistent explanation; Salient object detection; Contrastive learning; Image classification
DOI
10.1016/j.eswa.2024.123518
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks have emerged as highly effective tools for computer vision systems, showcasing remarkable performance. However, the intrinsic opacity, potential biases, and vulnerability to shortcut learning in these models pose significant concerns regarding their practical application. To tackle these issues, this work employs saliency priors and explanations to enhance the credibility, reliability, and interpretability of neural networks. Specifically, we employ a salient object detection algorithm to extract human-consistent priors from images for data augmentation. The identified saliency priors, along with explanations, serve as supervision signals directing the network's focus to salient regions within the image. Additionally, contrastive self-supervised learning is incorporated to enable the model to discern the most discriminative concepts. Experimental results confirm the algorithm's capability to align model explanations with human priors, thereby improving interpretability. Moreover, the proposed approach enhances model performance in data-limited and fine-grained classification scenarios. Importantly, our algorithm is label-independent, allowing for the integration of unlabeled data during training. In practice, this method contributes to improving the reliability and interpretability of intelligent models for downstream tasks. Our code is available here: https://github.com/DLAIResearch/SGC.
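As a rough illustration of the two supervision signals the abstract describes, the PyTorch sketch below pairs (1) an explanation-alignment loss that pushes a Grad-CAM-style map toward a saliency prior mask with (2) an InfoNCE contrastive term over two augmented views. This is a hypothetical reconstruction, not the authors' implementation (their code is at https://github.com/DLAIResearch/SGC); the Grad-CAM form, the MSE alignment loss, the InfoNCE variant, and all function names and loss weights are assumptions.

# Hypothetical sketch of the abstract's auxiliary losses (assumed forms,
# not the paper's exact objective).
import torch
import torch.nn.functional as F

def gradcam(features, logits, target):
    """Grad-CAM-style explanation map from the last conv features.
    features: (B, C, H, W) tensor kept in the autograd graph;
    logits: (B, K); target: (B,) class indices."""
    score = logits.gather(1, target.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # pooled gradient per channel
    cam = F.relu((weights * features).sum(dim=1))    # (B, H, W)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

def alignment_loss(cam, prior):
    """Penalize disagreement between the explanation map and a saliency
    prior mask (B, 1, H0, W0) produced by any salient object detector."""
    prior = F.interpolate(prior, size=cam.shape[-2:], mode="bilinear",
                          align_corners=False).squeeze(1)
    return F.mse_loss(cam, prior)

def info_nce(z1, z2, tau=0.1):
    """Standard InfoNCE between projections of two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                            # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical combined objective (lam_a, lam_c are assumed weights):
# loss = F.cross_entropy(logits, y) \
#        + lam_a * alignment_loss(gradcam(feats, logits, y), prior) \
#        + lam_c * info_nce(proj1, proj2)

Note that the contrastive term needs no class labels, so it can also be computed on unlabeled images, which is consistent with the abstract's claim that the method is label-independent.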
Pages: 17
Related papers
50 records in total
  • [1] Multimodal region-consistent saliency based on foreground and background priors for indoor scene
    Zhang, J.
    Wang, Q.
    Zhao, Y.
    Chen, S. Y.
JOURNAL OF MODERN OPTICS, 2016, 63 (17): 1639-1651
  • [2] Contrastive Explanations for Model Interpretability
    Jacovi, Alon
    Swayamdipta, Swabha
    Ravfogel, Shauli
    Elazar, Yanai
    Choi, Yejin
    Goldberg, Yoav
2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 1597-1611
  • [3] Visual Saliency with Statistical Priors
    Li, Jia
    Tian, Yonghong
    Huang, Tiejun
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2014, 107 (03): 239-253
  • [4] Geodesic Saliency Using Background Priors
    Wei, Yichen
    Wen, Fang
    Zhu, Wangjiang
    Sun, Jian
    COMPUTER VISION - ECCV 2012, PT III, 2012, 7574: 29-42
  • [5] Learning Visual Saliency with Statistical Priors
    Deshpande, Gauri
    Chapaneri, Santosh
    Jayaswal, Deepak
    PROCEEDINGS OF 2017 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS AND CONTROL (ISCO 2017), 2017: 82-87
  • [6] Visual analytics for process monitoring: Leveraging time-series imaging for enhanced interpretability
    Yousef, Ibrahim
    Tulsyan, Aditya
    Shah, Sirish L.
    Gopaluni, R. Bhushan
    JOURNAL OF PROCESS CONTROL, 2023, 132
  • [7] HIVE: Evaluating the Human Interpretability of Visual Explanations
    Kim, Sunnie S. Y.
    Meister, Nicole
    Ramaswamy, Vikram V.
    Fong, Ruth
    Russakovsky, Olga
    COMPUTER VISION, ECCV 2022, PT XII, 2022, 13672: 280-298
  • [8] Leveraging Stereopsis for Saliency Analysis
    Niu, Yuzhen
    Geng, Yujie
    Li, Xueqing
    Liu, Feng
    2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012: 454-461
  • [9] On the Granularity of Explanations in Model Agnostic NLP Interpretability
    Rychener, Yves
    Renard, Xavier
    Seddah, Djame
    Frossard, Pascal
    Detyniecki, Marcin
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT I, 2023, 1752: 498-512