SoK: Realistic adversarial attacks and defenses for intelligent network intrusion detection

Cited by: 12
Authors
Vitorino, João [1]
Praça, Isabel [1]
Maia, Eva [1 ]
Affiliations
[1] Polytechnic of Porto, School of Engineering (ISEP/IPP), Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development, 4249-015 Porto, Portugal
Keywords
Realistic adversarial examples; Adversarial robustness; Cybersecurity; Intrusion detection; Machine learning; Robustness; Systems
DOI
10.1016/j.cose.2023.103433
CLC classification number
TP [Automation technology; Computer technology]
Subject classification code
0812
Abstract
Machine Learning (ML) can be incredibly valuable for automating anomaly detection and cyber-attack classification, improving the way that Network Intrusion Detection (NID) is performed. However, despite the benefits of ML models, they are highly susceptible to adversarial cyber-attack examples specifically crafted to exploit them. A wide range of adversarial attacks has been created, and researchers have worked on various defense strategies to safeguard ML models, but most were not intended for the specific constraints of a communication network and its communication protocols, so they may lead to unrealistic examples in the NID domain. This Systematization of Knowledge (SoK) consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples and could be used in ML development and deployment scenarios with real network traffic flows. This SoK also describes the open challenges regarding the use of adversarial ML in the NID domain, defines the fundamental properties that are required for an adversarial example to be realistic, and provides guidelines for researchers to ensure that their experiments are adequate for a real communication network.
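To make the notion of a "realistic" adversarial example more concrete, the following Python sketch projects a freely perturbed tabular network-flow feature vector back onto a feasible region: only attacker-controllable features are changed, every value stays within its valid domain, discrete features stay discrete, and one simple inter-feature dependency is preserved. The feature names, bounds, and dependency rule are illustrative assumptions, not the constraint set defined in the paper.

import numpy as np

# Hypothetical flow features: [duration_s, fwd_packets, fwd_bytes, mean_iat_ms].
# The bounds and the dependency below are assumptions for illustration only.
LOWER = np.array([0.0, 1.0, 40.0, 0.0])        # physical lower bounds
UPPER = np.array([3600.0, 1e6, 1e9, 60000.0])  # assumed upper bounds
INTEGER_FEATURES = [1]                          # packet counts must remain integers
PERTURBABLE = [0, 1, 2]                         # features an attacker can actually influence

def project_to_realistic(x_adv: np.ndarray, x_orig: np.ndarray) -> np.ndarray:
    """Project a freely perturbed example back onto the feasible region."""
    x = x_adv.copy()
    # 1) Features outside the attacker's control are restored to their original values.
    frozen = [i for i in range(x.size) if i not in PERTURBABLE]
    x[frozen] = x_orig[frozen]
    # 2) Every feature must stay within its valid range.
    x = np.clip(x, LOWER, UPPER)
    # 3) Discrete features must stay discrete.
    x[INTEGER_FEATURES] = np.round(x[INTEGER_FEATURES])
    # 4) Example inter-feature dependency: total bytes cannot be smaller than
    #    the minimum frame size times the number of forwarded packets.
    MIN_FRAME_BYTES = 40.0
    x[2] = max(x[2], MIN_FRAME_BYTES * x[1])
    return x

# Usage: wrap any attack's raw perturbation with the projection step.
x_orig = np.array([12.5, 30.0, 4200.0, 35.0])
x_adv = x_orig + np.random.uniform(-5.0, 5.0, size=x_orig.shape)  # stand-in for an attack
x_realistic = project_to_realistic(x_adv, x_orig)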
Pages: 10
Related papers
50 records in total (entries [21]-[30] shown)
  • [21] Evading Deep Reinforcement Learning-based Network Intrusion Detection with Adversarial Attacks
    Merzouk, Mohamed Amine
    Delas, Josephine
    Neal, Christopher
    Cuppens, Frederic
    Boulahia-Cuppens, Nora
    Yaich, Reda
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY AND SECURITY, ARES 2022, 2022,
  • [22] Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey
    Alatwi, Huda Ali
    Aldweesh, Amjad
    2021 IEEE WORLD AI IOT CONGRESS (AIIOT), 2021, : 34 - 40
  • [23] Adversarial Attacks for Intrusion Detection Based on Bus Traffic
    He, Daojing
    Dai, Jiayu
    Liu, Xiaoxia
    Zhu, Shanshan
    Chan, Sammy
    Guizani, Mohsen
    IEEE NETWORK, 2022, 36 (04): : 203 - 209
  • [24] Adversarial attacks against supervised machine learning based network intrusion detection systems
    Alshahrani, Ebtihaj
    Alghazzawi, Daniyal
    Alotaibi, Reem
    Rabie, Osama
    PLOS ONE, 2022, 17 (10):
  • [25] Adversarial NLP for Social Network Applications: Attacks, Defenses, and Research Directions
    Alsmadi, Izzat
    Ahmad, Kashif
    Nazzal, Mahmoud
    Alam, Firoj
    Al-Fuqaha, Ala
    Khreishah, Abdallah
    Algosaibi, Abdulelah
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2023, 10 (06) : 3089 - 3108
  • [26] XAI-driven Adversarial Attacks on Network Intrusion Detectors
    Okada, Satoshi
    Jmila, Houda
    Akashi, Kunio
    Mitsunaga, Takuho
    Sekiya, Yuji
    Takase, Hideki
    Blanc, Gregory
    Nakamura, Hiroshi
    PROCEEDINGS OF THE 2024 EUROPEAN INTERDISCIPLINARY CYBERSECURITY CONFERENCE, EICC 2024, 2024, : 65 - 73
  • [27] Adversarial Attacks and Defenses in Deep Learning
    Ren, Kui
    Zheng, Tianhang
    Qin, Zhan
    Liu, Xue
    ENGINEERING, 2020, 6 (03) : 346 - 360
  • [28] DeepRobust: a Platform for Adversarial Attacks and Defenses
    Li, Yaxin
    Jin, Wei
    Xu, Han
    Tang, Jiliang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 16078 - 16080
  • [29] Constrained optimization based adversarial example generation for transfer attacks in network intrusion detection systems
    Chale, Marc
    Cox, Bruce
    Weir, Jeffery
    Bastian, Nathaniel D.
    OPTIMIZATION LETTERS, 2024, 18 (09) : 2169 - 2188
  • [30] On Adaptive Attacks to Adversarial Example Defenses
    Tramer, Florian
    Carlini, Nicholas
    Brendel, Wieland
    Madry, Aleksander
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33