A Fresh Look at Sanity Checks for Saliency Maps
Cited by: 0
Authors:
Hedstroem, Anna [1,2,3]
Weber, Leander [2]
Lapuschkin, Sebastian [2]
Hoehne, Marina [3,4,5]
Affiliations:
[1] TU Berlin, Dept Elect Engn & Comp Sci, Berlin, Germany
[2] Fraunhofer HHI, Dept Artificial Intelligence, Berlin, Germany
[3] Leibniz Inst Agr Engn & Bioecon eV ATB, UMI Lab, Potsdam, Germany
[4] BIFOLD Berlin Inst Fdn Learning & Data, Berlin, Germany
[5] Univ Potsdam, Dept Comp Sci, Potsdam, Germany
Source:
Funding:
European Union Horizon 2020;
Keywords:
Explainability;
Evaluation;
Faithfulness;
Quantification;
DOI:
10.1007/978-3-031-63787-2_21
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Discipline classification codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
The Model Parameter Randomisation Test (MPRT) is highly recognised in the eXplainable Artificial Intelligence (XAI) community for its fundamental evaluative criterion: explanations should be sensitive to the parameters of the model they seek to explain. However, recent studies have raised several methodological concerns about the empirical interpretation of MPRT. In response, we propose two modifications to the original test: Smooth MPRT and Efficient MPRT. The former reduces the impact of noise on evaluation outcomes via sampling, while the latter avoids the need for biased similarity measurements by re-interpreting the test through the increase in explanation complexity after full model randomisation. Our experiments show that these modifications enhance metric reliability, facilitating a more trustworthy deployment of explanation methods.
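The following minimal Python sketch illustrates the two ideas described in the abstract under stated assumptions: it takes a user-supplied explanation function, uses Gaussian input perturbations for the Smooth MPRT averaging step, and uses the entropy of a histogram of absolute attribution values as an illustrative complexity measure for Efficient MPRT. Function names and these specific choices are assumptions for illustration, not the authors' reference implementation.

import numpy as np

def smooth_explanation(explain_fn, model, x, n_samples=50, noise_std=0.1, rng=None):
    # Smooth MPRT idea: average explanations over noise-perturbed inputs so
    # that evaluation outcomes are less dominated by explanation noise.
    rng = np.random.default_rng() if rng is None else rng
    samples = [explain_fn(model, x + rng.normal(0.0, noise_std, size=x.shape))
               for _ in range(n_samples)]
    return np.mean(samples, axis=0)

def explanation_complexity(attribution, n_bins=100):
    # Illustrative complexity measure (an assumption): entropy of the
    # histogram of absolute attribution values.
    hist, _ = np.histogram(np.abs(attribution), bins=n_bins)
    p = hist / np.maximum(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def efficient_mprt_score(explain_fn, model, randomised_model, x):
    # Efficient MPRT idea: instead of comparing explanations with a similarity
    # measure, check how much explanation complexity rises once the model is
    # fully randomised; a larger increase indicates parameter sensitivity.
    c_original = explanation_complexity(explain_fn(model, x))
    c_random = explanation_complexity(explain_fn(randomised_model, x))
    return c_random - c_original

Here explain_fn(model, x) is any attribution method returning an array the same shape as x, and randomised_model is a copy of the model with all parameters re-initialised; both are caller-provided in this sketch.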
Pages: 403-420
Number of pages: 18