A Fresh Look at Sanity Checks for Saliency Maps

Cited by: 0
Authors
Hedstroem, Anna [1 ,2 ,3 ]
Weber, Leander [2 ]
Lapuschkin, Sebastian [2 ]
Hoehne, Marina [3 ,4 ,5 ]
Affiliations
[1] TU Berlin, Dept Elect Engn & Comp Sci, Berlin, Germany
[2] Fraunhofer HHI, Dept Artificial Intelligence, Berlin, Germany
[3] Leibniz Inst Agr Engn & Bioecon eV ATB, UMI Lab, Potsdam, Germany
[4] BIFOLD Berlin Inst Fdn Learning & Data, Berlin, Germany
[5] Univ Potsdam, Dept Comp Sci, Potsdam, Germany
Funding
European Union's Horizon 2020;
Keywords
Explainability; Evaluation; Faithfulness; Quantification;
DOI
10.1007/978-3-031-63787-2_21
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Model Parameter Randomisation Test (MPRT) is highly recognised in the eXplainable Artificial Intelligence (XAI) community due to its fundamental evaluative criterion: explanations should be sensitive to the parameters of the model they seek to explain. However, recent studies have raised several methodological concerns about the empirical interpretation of MPRT. In response, we propose two modifications to the original test: Smooth MPRT and Efficient MPRT. The former reduces the impact of noise on evaluation outcomes via sampling, while the latter avoids the need for biased similarity measurements by re-interpreting the test through the increase in explanation complexity after full model randomisation. Our experiments show that these modifications enhance metric reliability, facilitating a more trustworthy deployment of explanation methods.
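The abstract only sketches the two variants, so the following is a minimal, illustrative Python sketch of how they could be realised, not the authors' reference implementation. The names `model`, `randomised_model` and `explain_fn` are hypothetical placeholders for the user's network, a copy of it with fully randomised parameters, and an attribution method; the noise level, sample count and histogram-entropy complexity estimator are likewise assumptions made for illustration.

```python
import numpy as np

def smooth_mprt_score(model, randomised_model, explain_fn, x, y,
                      n_samples=5, noise_std=0.05, rng=None):
    """Illustrative Smooth MPRT: average attributions over noisy copies of
    the input before comparing the original and the randomised model, which
    dampens the influence of noise on the similarity estimate."""
    rng = rng or np.random.default_rng(0)

    def smoothed_explanation(m):
        noisy = [x + rng.normal(0.0, noise_std, size=x.shape)
                 for _ in range(n_samples)]
        return np.mean([explain_fn(m, xn, y) for xn in noisy], axis=0)

    e_orig = smoothed_explanation(model)
    e_rand = smoothed_explanation(randomised_model)
    # Similarity (here: Pearson correlation) between the two attribution maps;
    # a low value indicates the explanation is sensitive to model parameters.
    return np.corrcoef(e_orig.ravel(), e_rand.ravel())[0, 1]

def efficient_mprt_score(model, randomised_model, explain_fn, x, y, n_bins=100):
    """Illustrative Efficient MPRT: compare explanation complexity before and
    after full model randomisation instead of using a similarity measure."""
    def complexity(m):
        a = np.abs(explain_fn(m, x, y)).ravel()
        hist, edges = np.histogram(a, bins=n_bins, density=True)
        p = hist * np.diff(edges)          # bin probabilities (sum to 1)
        p = p[p > 0]
        return -np.sum(p * np.log(p))      # histogram entropy of attributions

    # A positive value (complexity rises after randomisation) is the
    # desired outcome under this re-interpretation of the test.
    return complexity(randomised_model) - complexity(model)
```

In this reading, `smooth_mprt_score` is interpreted like the original MPRT similarity score, only computed on noise-averaged attributions, while `efficient_mprt_score` replaces the similarity measurement entirely with a before/after comparison of explanation complexity.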
Pages: 403 - 420
Number of pages: 18