Securing Network Traffic Classification Models against Adversarial Examples Using Derived Variables

Cited by: 1
Authors
Adeke, James Msughter [1 ,2 ]
Liu, Guangjie [1 ,2 ]
Zhao, Junjie [1 ,2 ]
Wu, Nannan [3 ]
Bashir, Hafsat Muhammad [1 ]
Davoli, Franco
Affiliations
[1] Nanjing Univ Informat Sci Technol, Sch Elect & Informat Engn, Nanjing 210044, Peoples R China
[2] Minist Educ, Key Lab Intelligent Support Technol Complex Enviro, Nanjing 210044, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
Keywords
machine learning; adversarial attack; network traffic classification; derived variables; robustness; intrusion detection
DOI
10.3390/fi15120405
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which malicious inputs are modified by adversaries to produce a desired output. Adversarial training is an effective defense against such attacks, but it relies on access to a substantial number of AEs, a prerequisite that entails significant computational resources and carries the inherent limitation of poor performance on clean data. To address these problems, this study proposes a novel approach to improve the robustness of ML-based network traffic classification models by integrating derived variables (DVars) into training. Unlike adversarial training, our approach focuses on enhancing training with DVars, which introduce randomness into the input data. DVars are generated from the baseline dataset and significantly improve the resilience of the model to AEs. To evaluate the effectiveness of DVars, experiments were conducted using the CSE-CIC-IDS2018 dataset and three state-of-the-art ML-based models: decision tree (DT), random forest (RF), and k-nearest neighbors (KNN). The results show that DVars can improve the accuracy of KNN under attack from 0.45 to 0.84 for low-intensity attacks and from 0.32 to 0.66 for high-intensity attacks. Furthermore, both DT and RF achieve a significant increase in accuracy when subjected to attacks of different intensities. Moreover, DVars are computationally efficient, scalable, and do not require access to AEs.
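The abstract does not spell out how DVars are constructed. Below is a minimal, hypothetical sketch of the general idea as described here: derived variables generated from the baseline features via seeded random transformations, appended to each sample before training. All names (`derive_variables`, `n_dvars`) and the specific construction (random linear combinations) are illustrative assumptions, not the paper's actual method.

```python
import random

def derive_variables(features, n_dvars=4, seed=0):
    """Append n_dvars derived variables to each feature vector.

    Each derived variable is a random linear combination of the
    baseline features, injecting controlled randomness into the
    input space. This is a simplified reading of the DVars idea;
    the paper's exact construction may differ.
    """
    rng = random.Random(seed)
    # Fixed random projection weights: one row per derived variable.
    weights = [[rng.uniform(-1.0, 1.0) for _ in range(len(features[0]))]
               for _ in range(n_dvars)]
    augmented = []
    for x in features:
        dvars = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
        augmented.append(list(x) + dvars)
    return augmented

# Baseline flow features (e.g., packet count, duration, byte rate) for two flows.
baseline = [[12.0, 0.5, 3.0], [7.0, 1.2, 9.0]]
augmented = derive_variables(baseline, n_dvars=4, seed=42)
print(len(augmented[0]))  # 3 baseline features + 4 derived variables = 7
```

A classifier (DT, RF, or KNN) would then be trained on the augmented vectors instead of the baseline features alone, which is how the approach avoids needing any AEs at training time.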
Pages: 21