An Adversarial Approach for Explainable AI in Intrusion Detection Systems

Times Cited: 0
|
Authors
Marino, Daniel L. [1 ]
Wickramasinghe, Chathurika S. [1 ]
Manic, Milos [1 ]
Affiliations
[1] Virginia Commonwealth Univ, Dept Comp Sci, Richmond, VA 23284 USA
Keywords
Adversarial Machine Learning; Adversarial samples; Explainable AI; cyber-security;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Despite the growing popularity of modern machine learning techniques (e.g. Deep Neural Networks) in cyber-security applications, most of these models are perceived as a black box by the user. Adversarial machine learning offers an approach to increase our understanding of these models. In this paper we present an approach to generate explanations for incorrect classifications made by data-driven Intrusion Detection Systems (IDSs). An adversarial approach is used to find the minimum modifications of the input features required to correctly classify a given set of misclassified samples. The magnitude of such modifications is used to visualize the most relevant features that explain the reason for the misclassification. The presented methodology generated satisfactory explanations that describe the reasoning behind the misclassifications, with descriptions that match expert knowledge. The advantages of the presented methodology are: 1) it is applicable to any classifier with defined gradients; 2) it does not require any modification of the classifier model; 3) it can be extended to perform further diagnosis (e.g. vulnerability assessment) and to gain further understanding of the system. Experimental evaluation was conducted on the NSL-KDD99 benchmark dataset using linear and multilayer perceptron classifiers. The results are shown using intuitive visualizations to improve their interpretability.
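For readers who want a concrete picture of the idea described in the abstract, the sketch below illustrates one plausible reading of it: a gradient-based search for the smallest input modification that makes a differentiable classifier assign the correct label to a misclassified sample, with the per-feature magnitude of that modification read as the explanation. The toy linear softmax model, the cross-entropy-plus-L2 objective, and all names (find_minimal_modification, lam, lr) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions only, not the paper's code): find the smallest
# input modification delta that flips a differentiable classifier's output to
# the correct label, then use |delta| per feature as the explanation.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier: logits = x @ W + b. It stands in for any
# model with defined input gradients (e.g. a multilayer perceptron).
n_features, n_classes = 5, 2
W = rng.normal(size=(n_features, n_classes))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def find_minimal_modification(x, y_true, lam=0.1, lr=0.05, steps=500):
    """Gradient descent on cross-entropy(correct label) + lam * ||delta||^2."""
    delta = np.zeros_like(x)
    onehot = np.eye(n_classes)[y_true]
    for _ in range(steps):
        p = softmax((x + delta) @ W + b)
        # For a linear softmax model, d(cross-entropy)/d(input) = W @ (p - onehot).
        grad = W @ (p - onehot) + 2.0 * lam * delta
        delta -= lr * grad
    return delta

# A sample to explain (illustrative; whether it is misclassified depends on
# the random toy weights above).
x = np.array([1.0, -0.5, 0.3, 2.0, -1.2])
y_true = 1
delta = find_minimal_modification(x, y_true)
print("predicted before:", softmax(x @ W + b).argmax())
print("predicted after :", softmax((x + delta) @ W + b).argmax())
# Features with the largest |delta| are the ones most responsible for the error.
print("per-feature relevance:", np.abs(delta).round(3))
```

In the paper this kind of per-feature modification magnitude is what gets visualized for linear and multilayer perceptron IDS classifiers on the NSL-KDD99 benchmark; the sketch above only illustrates the optimization principle under the stated assumptions.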
Pages: 3237-3243
Page count: 7
Related Papers
50 records in total
  • [31] Explainable AI supported hybrid deep learning method for layer 2 intrusion detection
    Kilincer, Ilhan Firat
    EGYPTIAN INFORMATICS JOURNAL, 2025, 30
  • [32] Deceiving Post-hoc Explainable AI (XAI) Methods in Network Intrusion Detection
    Senevirathna, Thulitha
    Siniarski, Bartlomiej
    Liyanage, Madhusanka
    Wang, Shen
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 107 - 112
  • [33] An Explainable AI approach for detecting failures in air pressure systems
    Farea, Shawqi Mohammed
    Mumcuoglu, Mehmet Emin
    Unel, Mustafa
    ENGINEERING FAILURE ANALYSIS, 2025, 173
  • [34] Adaptable, incremental, and explainable network intrusion detection systems for internet of things
    Cerasuolo, Francesco
    Bovenzi, Giampaolo
    Ciuonzo, Domenico
    Pescape, Antonio
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 144
  • [35] On the Robustness of Intrusion Detection Systems for Vehicles Against Adversarial Attacks
    Choi, Jeongseok
    Kim, Hyoungshick
    INFORMATION SECURITY APPLICATIONS, 2021, 13009 : 39 - 50
  • [36] Adversarial Attacks on Intrusion Detection Systems Using the LSTM Classifier
    Kulikov, D. A.
    Platonov, V. V.
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2021, 55 (08) : 1080 - 1086
  • [38] Adversarial Attacks Against Network Intrusion Detection in IoT Systems
    Qiu, Han
    Dong, Tian
    Zhang, Tianwei
    Lu, Jialiang
    Memmi, Gerard
    Qiu, Meikang
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (13) : 10327 - 10335
  • [39] Explainable AI for DeepFake Detection
    Mansoor, Nazneen
    Iliev, Alexander I.
    APPLIED SCIENCES-BASEL, 2025, 15 (02):
  • [40] Generative Adversarial Networks For Launching and Thwarting Adversarial Attacks on Network Intrusion Detection Systems
    Usama, Muhammad
    Asim, Muhammad
    Latif, Siddique
    Qadir, Junaid
    Al-Fuqaha, Ala
    2019 15TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2019, : 78 - 83