An Adversarial Approach for Explainable AI in Intrusion Detection Systems

Cited: 0
Authors
Marino, Daniel L. [1 ]
Wickramasinghe, Chathurika S. [1 ]
Manic, Milos [1 ]
Affiliations
[1] Virginia Commonwealth Univ, Dept Comp Sci, Richmond, VA 23284 USA
Keywords
Adversarial Machine Learning; Adversarial samples; Explainable AI; cyber-security;
DOI
Not available
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Despite the growing popularity of modern machine learning techniques (e.g., Deep Neural Networks) in cyber-security applications, most of these models are perceived as a black box by the user. Adversarial machine learning offers an approach to increase our understanding of these models. In this paper, we present an approach to generate explanations for incorrect classifications made by data-driven Intrusion Detection Systems (IDSs). An adversarial approach is used to find the minimum modifications (of the input features) required to correctly classify a given set of misclassified samples. The magnitude of such modifications is used to visualize the most relevant features that explain the reason for the misclassification. The presented methodology generated satisfactory explanations that describe the reasoning behind the misclassifications, with descriptions that match expert knowledge. The advantages of the presented methodology are: 1) it is applicable to any classifier with defined gradients; 2) it does not require any modification of the classifier model; 3) it can be extended to perform further diagnosis (e.g., vulnerability assessment) and gain further understanding of the system. Experimental evaluation was conducted on the NSL-KDD benchmark dataset using Linear and Multilayer Perceptron classifiers. The results are presented using intuitive visualizations to improve their interpretability.
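The technique described in the abstract can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: it stands in for the IDS with a toy logistic-regression classifier (hypothetical weights `w`, `b`), and uses plain gradient descent on cross-entropy plus an L1 penalty to find a small modification `delta` that flips a misclassified sample to its correct class. The exact loss and optimizer in the paper may differ; the point is that the per-feature magnitude of `delta` becomes the explanation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def explain_misclassification(w, b, x0, y_true, lam=0.1, lr=0.5, steps=500):
    """Find a small modification delta such that x0 + delta is classified
    as y_true, by gradient descent on cross-entropy + lam * ||delta||_1.
    The per-feature magnitude of delta ranks the features that explain
    the original misclassification."""
    delta = np.zeros_like(x0)
    for _ in range(steps):
        p = sigmoid(w @ (x0 + delta) + b)
        # Gradient of the binary cross-entropy w.r.t. delta, plus the
        # L1 subgradient that keeps the modification minimal/sparse.
        grad = (p - y_true) * w + lam * np.sign(delta)
        delta -= lr * grad
    return delta

# Toy classifier and a sample it gets wrong (true label 1, predicted 0).
w = np.array([2.0, -1.0, 0.5])
b = -1.0
x0 = np.array([0.1, 1.5, 0.2])
y_true = 1.0

delta = explain_misclassification(w, b, x0, y_true)
p_before = sigmoid(w @ x0 + b)           # below 0.5: misclassified
p_after = sigmoid(w @ (x0 + delta) + b)  # above 0.5: corrected
ranking = np.argsort(-np.abs(delta))     # most relevant features first
```

Because the method only needs gradients of the classifier's loss with respect to the inputs, the same loop works unchanged for a Multilayer Perceptron, matching advantage 1) in the abstract.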
Pages: 3237 - 3243
Page count: 7
Related Papers (50 total)
  • [1] Detection of Adversarial Attacks in AI-Based Intrusion Detection Systems Using Explainable AI
    Tcydenova, Erzhena
    Kim, Tae Woo
    Lee, Changhoon
    Park, Jong Hyuk
    Human-centric Computing and Information Sciences, 2021, 11
  • [2] Adversarial Attack Detection Approach for Intrusion Detection Systems
    Degirmenci, Elif
    Ozcelik, Ilker
    Yazici, Ahmet
    IEEE Access, 2024, 12 : 195996 - 196009
  • [3] Enhancing Intrusion Detection with Explainable AI: A Transparent Approach to Network Security
    Mallampati, Seshu Bhavani
    Seetha, Hari
    Cybernetics and Information Technologies, 2024, 24 (01) : 98 - 117
  • [4] An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
    Gowrisankar, Balachandar
    Thing, Vrizlynn L. L.
    Computers & Security, 2024, 139
  • [5] Adversarial Robust and Explainable Network Intrusion Detection Systems Based on Deep Learning
    Sauka, Kudzai
    Shin, Gun-Yoo
    Kim, Dong-Wook
    Han, Myung-Mook
    Applied Sciences-Basel, 2022, 12 (13)
  • [6] Evaluation of Applying Federated Learning to Distributed Intrusion Detection Systems Through Explainable AI
    Oki, Ayaka
    Ogawa, Yukio
    Ota, Kaoru
    Dong, Mianxiong
    IEEE Networking Letters, 2024, 6 (03) : 198 - 202
  • [7] Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection
    Gyawali, Sohan
    Huang, Jiaqi
    Jiang, Yili
    2024 19th Annual System of Systems Engineering Conference, SoSE 2024, 2024 : 92 - 97
  • [8] Explainable AI-based Intrusion Detection in the Internet of Things
    Siganos, Marios
    Radoglou-Grammatikis, Panagiotis
    Kotsiuba, Igor
    Markakis, Evangelos
    Moscholios, Ioannis
    Goudos, Sotirios
    Sarigiannidis, Panagiotis
    18th International Conference on Availability, Reliability & Security, ARES 2023, 2023
  • [9] Explainable AI for Intrusion Detection Systems: LIME and SHAP Applicability on Multi-Layer Perceptron
    Gaspar, Diogo
    Silva, Paulo
    Silva, Catarina
    IEEE Access, 2024, 12 : 30164 - 30175