Evidential classification for defending against adversarial attacks on network traffic

Cited by: 5
Authors
Beechey, Matthew [1 ]
Lambotharan, Sangarapillai [1 ]
Kyriakopoulos, Konstantinos G. [1 ]
Affiliations
[1] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough, Leics, England
Keywords
Dempster-Shafer Theory; Evidence theory; Evidential classification; Neural Networks; Adversarial machine learning; Network security
DOI
10.1016/j.inffus.2022.11.024
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Research interest in demonstrating the vulnerability of Machine Learning (ML) algorithms to sophisticated Adversarial Machine Learning (AML) perturbation attacks has grown in recent years. Adversarial attacks perturb dataset instances by finding the nearest decision boundary and moving the instance values towards it. A popular challenge in this field is therefore combating such attacks by increasing model accuracy. Making a model more robust often requires the ML engineer to have preemptive knowledge not only that an adversarial attack will occur, but also which attack will occur. This work is the first to reinforce a Neural Network (NN) model in a network security environment against AML attacks by leveraging an evidential classification approach. Evidential approaches quantify an extra degree of uncertainty between features, enabling ambiguous instances to be classified as uncertain. Crucially, the proposed approach requires neither training on perturbed datasets nor any knowledge that an adversarial attack may take place. Recent advances in making ML models more robust against single-step adversarial attacks have been highly successful, but researchers have found it considerably harder to make models robust against complex, iterative attacks. The proposed approach is evaluated on a modern network security dataset and compared against a conventional Bayesian NN. Rather than training a model to increase accuracy, the proposed approach aims to reduce the misclassification rate of perturbed data. By allowing instances in a dataset to be classified as uncertain, the proposed approach, compared against a conventional NN, decreases the misclassification rates on the two perturbed malicious classes from 70.53% to 13.09% and from 99.67% to 1.33%, respectively.
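The evidential classification the abstract describes builds on Dempster-Shafer theory, in which belief masses assigned to sets of classes (rather than single classes) capture uncertainty. The sketch below is a minimal, hedged illustration of Dempster's rule of combination over a hypothetical {benign, attack} frame; it is not the paper's implementation, and the mass values are invented for the example. Mass left on the full frame after combining evidence is what permits an ambiguous instance to be labelled "uncertain" instead of being forced into a class.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozenset focal elements to masses that sum to 1.
    Conflicting evidence (empty intersections) is discarded and the remaining
    mass is renormalised by 1 - K, where K is the total conflict.
    """
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

BENIGN, ATTACK = frozenset({"benign"}), frozenset({"attack"})
THETA = BENIGN | ATTACK  # full frame of discernment = total ignorance

# Two hypothetical, partially conflicting bodies of evidence
m1 = {BENIGN: 0.6, ATTACK: 0.1, THETA: 0.3}
m2 = {BENIGN: 0.2, ATTACK: 0.5, THETA: 0.3}
m = combine(m1, m2)
# m[THETA] is the residual uncertainty; a thresholded decision rule can
# classify the instance as "uncertain" when this mass is too large.
```

A classifier built on this idea would output such a mass function per instance and reject (as uncertain) any instance whose frame mass exceeds a chosen threshold, which is how perturbed instances can avoid confident misclassification.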
Pages: 115-126
Page count: 12
Related Papers
50 items in total
  • [1] Defending against adversarial machine learning attacks using hierarchical learning: A case study on network traffic attack classification
    McCarthy, Andrew
    Ghadafi, Essam
    Andriotis, Panagiotis
    Legg, Phil
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2023, 72
  • [2] On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
    Park, Sanglee
    So, Jungmin
    APPLIED SCIENCES-BASEL, 2020, 10 (22): 1-16
  • [3] Defending Against Adversarial Attacks on Time-series with Selective Classification
    Kuehne, Joana
    Guehmann, Clemens
    2022 PROGNOSTICS AND HEALTH MANAGEMENT CONFERENCE, PHM-LONDON 2022, 2022: 169-175
  • [4] Defending network intrusion detection systems against adversarial evasion attacks
    Pawlicki, Marek
    Choras, Michal
    Kozik, Rafal
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, 110: 148-154
  • [5] Defending against Adversarial Attacks on Medical Imaging AI System, Classification or Detection?
    Li, Xin
    Pan, Deng
    Zhu, Dongxiao
    2021 IEEE 18TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), 2021: 1677-1681
  • [6] Defending against adversarial attacks by randomized diversification
    Taran, Olga
    Rezaeifar, Shideh
    Holotyak, Taras
    Voloshynovskiy, Slava
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 11218-11225
  • [7] Defending Distributed Systems Against Adversarial Attacks
    Su, Lili
    Performance Evaluation Review, 2020, 47 (3): 24-27
  • [8] Fuzzy classification boundaries against adversarial network attacks
    Iglesias, Felix
    Milosevic, Jelena
    Zseby, Tanja
    FUZZY SETS AND SYSTEMS, 2019, 368: 20-35
  • [9] Evaluating Resilience of Encrypted Traffic Classification against Adversarial Evasion Attacks
    Maarouf, Ramy
    Sattar, Danish
    Matrawy, Ashraf
    26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021
  • [10] Effects of and Defenses Against Adversarial Attacks on a Traffic Light Classification CNN
    Wan, Morris
    Han, Meng
    Li, Lin
    Li, Zhigang
    He, Selena
    ACMSE 2020: PROCEEDINGS OF THE 2020 ACM SOUTHEAST CONFERENCE, 2020: 94-99