Explaining anomalies detected by autoencoders using Shapley Additive Explanations

Cited by: 167
Authors
Antwarg, Liat [1 ]
Miller, Ronnie Mindlin [1 ]
Shapira, Bracha [1 ]
Rokach, Lior [1 ]
Affiliations
[1] Ben Gurion Univ Negev, Dept Informat & Software Syst Engn, Beer Sheva, Israel
Keywords
Explainable black-box models; XAI; Autoencoder; Shapley values; SHAP; Anomaly detection; NETWORK;
DOI
10.1016/j.eswa.2021.115736
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning algorithms for anomaly detection, such as autoencoders, point out the outliers, saving experts the time-consuming task of examining normal cases in order to find anomalies. Most outlier detection algorithms output a score for each instance in the database. The top-k highest-scoring outliers are returned to the user for further inspection; however, the manual validation of results becomes challenging without justification or additional clues. An explanation of why an instance is anomalous enables the experts to focus their investigation on the most important anomalies and may increase their trust in the algorithm. Recently, a game theory-based framework known as SHapley Additive exPlanations (SHAP) was shown to be effective in explaining various supervised learning models. In this paper, we propose a method that uses Kernel SHAP to explain anomalies detected by an autoencoder, which is an unsupervised model. The proposed explanation method aims to provide a comprehensive explanation to the experts by focusing on the connection between the features with high reconstruction error and the features that are most important in terms of their effect on the reconstruction error. We propose a black-box explanation method, because it has the advantage of being able to explain any autoencoder without being aware of the exact architecture of the autoencoder model. The proposed explanation method extracts and visually depicts both the features that contribute the most to the anomaly and those that offset it. An expert evaluation using real-world data demonstrates the usefulness of the proposed method in helping domain experts better understand the anomalies. Our evaluation of the explanation method, in which a "perfect" autoencoder is used as the ground truth, shows that the proposed method explains anomalies correctly, using the exact features, and evaluation on real data demonstrates that (1) our explanation model, which uses SHAP, is more robust than the Local Interpretable Model-agnostic Explanations (LIME) method, and (2) the explanations our method provides are more effective at reducing the anomaly score than other methods.
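The abstract's recipe, scoring each feature by its reconstruction error and then attributing the error of the top features to the inputs with Kernel SHAP, can be sketched in a few lines. The following is a minimal illustration consistent with that description, not the authors' exact implementation: the Keras-style `autoencoder.predict` interface, the `top_k` and `background_size` parameters, and the `explain_anomaly` helper are assumptions made for this example, while `shap.KernelExplainer`, `shap.sample`, and `shap_values` are the standard SHAP API.

```python
# Minimal sketch (not the paper's exact code): explaining one anomalous
# instance from an autoencoder with Kernel SHAP. Assumes a trained
# autoencoder exposing .predict() and a 2-D numpy array X_train of
# normal training data.
import numpy as np
import shap

def explain_anomaly(autoencoder, X_train, x_anomaly, top_k=3, background_size=100):
    """For each of the top_k features with the highest reconstruction
    error, attribute that error to the input features with Kernel SHAP."""
    # Per-feature squared reconstruction error of the anomalous instance.
    recon = autoencoder.predict(x_anomaly.reshape(1, -1))[0]
    errors = (x_anomaly - recon) ** 2
    top_features = np.argsort(errors)[::-1][:top_k]

    # Background sample that stands in for "missing" feature values.
    background = shap.sample(X_train, background_size)

    explanations = {}
    for j in top_features:
        # f maps raw inputs to the reconstruction error of feature j;
        # Kernel SHAP treats it as a black box, so the autoencoder's
        # architecture never needs to be known.
        def f(X, j=j):
            R = autoencoder.predict(X)
            return (X[:, j] - R[:, j]) ** 2

        explainer = shap.KernelExplainer(f, background)
        # Positive SHAP values push the error up (contribute to the
        # anomaly); negative values offset it.
        explanations[j] = explainer.shap_values(x_anomaly.reshape(1, -1))[0]
    return explanations
```

Because `f` only calls `predict`, the explainer never inspects the network's layers; this is what makes the approach black-box, in that it applies to any autoencoder that exposes a prediction function.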
Pages: 14
Related Papers (50 in total)
  • [31] Shapley Additive Explanations of Multigeometrical Variable Coupling Effect in Transonic Compressor
    Wang, Junying
    He, Xiao
    Wang, Baotong
    Zheng, Xinqian
    JOURNAL OF ENGINEERING FOR GAS TURBINES AND POWER-TRANSACTIONS OF THE ASME, 2022, 144 (04)
  • [32] Topology Optimization With Shapley Additive Explanations for Permanent Magnet Synchronous Motors
    Sasaki, Hidenori
    Yamamura, Koichi
    IEEE TRANSACTIONS ON MAGNETICS, 2024, 60 (03) : 1 - 4
  • [33] Explainable Anomaly Detection for District Heating Based on Shapley Additive Explanations
    Park, Sungwoo
    Moon, Jihoon
    Hwang, Eenjun
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW 2020), 2020, : 762 - 765
  • [34] Investigation of feature contribution to shield tunneling-induced settlement using Shapley additive explanations method
    Kannangara, K. K. Pabodha M.
    Zhou, Wanhuan
    Ding, Zhi
    Hong, Zhehao
    JOURNAL OF ROCK MECHANICS AND GEOTECHNICAL ENGINEERING, 2022, 14 (04) : 1052 - 1063
  • [35] Epidemiological exploration of the impact of bluetooth headset usage on thyroid nodules using Shapley additive explanations method
    Zhou, Nan
    Qin, Wei
    Zhang, Jia-Jin
    Wang, Yun
    Wen, Jian-Sheng
    Lim, Yang Mooi
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [36] Prediction of HHV of fuel by Machine learning Algorithm: Interpretability analysis using Shapley Additive Explanations (SHAP)
    Timilsina, Manish Sharma
    Sen, Subhadip
    Uprety, Bibek
    Patel, Vashishtha B.
    Sharma, Prateek
    Sheth, Pratik N.
    FUEL, 2024, 357
  • [37] Using Shapley additive explanations to interpret extreme gradient boosting predictions of grassland degradation in Xilingol, China
    Batunacun
    Wieland, Ralf
    Lakes, Tobia
    Nendel, Claas
    GEOSCIENTIFIC MODEL DEVELOPMENT, 2021, 14 (03) : 1493 - 1510
  • [39] Parametric Analysis for Torque Prediction in Friction Stir Welding Using Machine Learning and Shapley Additive Explanations
    Belalia, Sif Eddine
    Serier, Mohamed
    Al-Sabur, Raheem
    JOURNAL OF COMPUTATIONAL APPLIED MECHANICS, 2024, 55 (01): : 113 - 124