Deep Reinforcement Adversarial Learning Against Botnet Evasion Attacks

Cited by: 61
Authors
Apruzzese, Giovanni [1 ]
Andreolini, Mauro [2 ]
Marchetti, Mirco [3 ]
Venturi, Andrea [3 ]
Colajanni, Michele [4 ]
Affiliations
[1] Univ Liechtenstein, Hilti Chair Data & Applicat Secur, FL-9490 Vaduz, Liechtenstein
[2] Univ Modena & Reggio Emilia, Dept Phys Comp Sci & Math, I-41121 Modena, Italy
[3] Univ Modena & Reggio Emilia, Dept Engn Enzo Ferrari, I-41121 Modena, Italy
[4] Univ Bologna, Dept Informat Sci & Engn, I-40126 Bologna, Italy
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2020, Vol. 17, Issue 04
Keywords
Detectors; Botnet; Training; Computer security; Machine learning; Feature extraction; Perturbation methods; Adversarial attack; machine learning; network intrusion detection; deep reinforcement learning; botnet; INTRUSION;
DOI
10.1109/TNSM.2020.3031843
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
As cybersecurity detectors increasingly rely on machine learning mechanisms, attacks against these defenses escalate as well. Supervised classifiers are prone to adversarial evasion, and existing countermeasures suffer from many limitations: most solutions degrade performance in the absence of adversarial perturbations, cannot cope with novel attack variants, or are applicable only to specific machine learning algorithms. We propose the first framework that can protect botnet detectors from adversarial attacks through deep reinforcement learning mechanisms. It automatically generates realistic attack samples that evade detection, and it uses these samples to build an augmented training set from which hardened detectors are trained. In this way, we obtain more resilient detectors that withstand even unforeseen evasion attacks, with the notable merit of not penalizing their performance in the absence of such attacks. We validate our proposal through an extensive experimental campaign that considers multiple machine learning algorithms and public datasets. The results highlight the improvements of the proposed solution over the state of the art. Our method paves the way for novel and more robust cybersecurity detectors based on machine learning applied to network traffic analytics.
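The hardening loop summarized in the abstract (train a baseline detector, let an agent craft evasive variants of malicious flows, add the variants to the training set, retrain) can be sketched in a few lines of Python. This is a minimal illustrative sketch and not the authors' implementation: the synthetic flow features, the scikit-learn random forest, and the simple mimicry-style perturbation that stands in for the paper's deep reinforcement learning agent are all assumptions made here for brevity.

# Minimal sketch of adversarial-training-based hardening of a botnet detector.
# All feature names, distributions, and the perturbation routine are
# illustrative assumptions; the paper's evasion agent is a deep reinforcement
# learning policy, abstracted here as a simple mimicry-style perturbation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic NetFlow-like features: [duration, bytes, packets, dst_port_entropy]
X_benign = rng.normal([5.0, 4000.0, 40.0, 0.2], [2.0, 1000.0, 10.0, 0.05], size=(500, 4))
X_botnet = rng.normal([1.0, 600.0, 8.0, 0.8], [0.5, 200.0, 3.0, 0.10], size=(500, 4))
X = np.vstack([X_benign, X_botnet])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = malicious

baseline = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def generate_evasive_samples(clf, X_mal, benign_center, n_tries=20):
    """Stand-in for the DRL agent: nudge malicious flows toward typical
    benign feature values and keep the variants the classifier misses."""
    evasive = []
    for x in X_mal:
        for _ in range(n_tries):
            step = rng.uniform(0.1, 0.9)                 # perturbation magnitude
            candidate = x + step * (benign_center - x)   # mimicry-style shift
            if clf.predict(candidate.reshape(1, -1))[0] == 0:  # evades detection
                evasive.append(candidate)
                break
    return np.array(evasive) if evasive else np.empty((0, X_mal.shape[1]))

X_evasive = generate_evasive_samples(baseline, X_botnet, X_benign.mean(axis=0))

# Augment the original training set with the evasive variants (still labeled
# malicious) and retrain to obtain the hardened detector.
X_aug = np.vstack([X, X_evasive])
y_aug = np.concatenate([y, np.ones(len(X_evasive))])
hardened = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)

print("evasive variants found:", len(X_evasive))
if len(X_evasive):
    print("baseline detection rate on evasive variants:", baseline.predict(X_evasive).mean())
    print("hardened detection rate on evasive variants:", hardened.predict(X_evasive).mean())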
Pages: 1975 - 1987
Number of pages: 13
Related Papers
50 records in total
  • [1] DReLAB - Deep REinforcement Learning Adversarial Botnet: A benchmark dataset for adversarial attacks against botnet Intrusion Detection Systems
    Venturi, Andrea
    Apruzzese, Giovanni
    Andreolini, Mauro
    Colajanni, Michele
    Marchetti, Mirco
    DATA IN BRIEF, 2021, 34
  • [2] Deep reinforcement learning based Evasion Generative Adversarial Network for botnet detection
    Randhawa, Rizwan Hamid
    Aslam, Nauman
    Alauthman, Mohammad
    Khalid, Muhammad
    Rafiq, Husnain
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 150 : 294 - 302
  • [3] Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning
    Sun, Jianwen
    Zhang, Tianwei
    Xie, Xiaofei
    Ma, Lei
    Zheng, Yan
    Chen, Kangjie
    Liu, Yang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5883 - 5891
  • [4] Evasion and Causative Attacks with Adversarial Deep Learning
    Shi, Yi
    Sagduyu, Yalin E.
    MILCOM 2017 - 2017 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2017, : 243 - 248
  • [5] Evasion Attacks with Adversarial Deep Learning Against Power System State Estimation
    Sayghe, Ali
    Zhao, Junbo
    Konstantinou, Charalambos
    2020 IEEE POWER & ENERGY SOCIETY GENERAL MEETING (PESGM), 2020,
  • [6] ACADIA: Efficient and Robust Adversarial Attacks Against Deep Reinforcement Learning
    Ali, Haider
    Al Ameedi, Mohannad
    Swami, Ananthram
    Ning, Rui
    Li, Jiang
    Wu, Hongyi
    Cho, Jin-Hee
    2022 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2022, : 1 - 9
  • [7] On the Limitations of Targeted Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition
    Bair, Samuel
    DelVecchio, Matthew
    Flowers, Bryse
    Michaels, Alan J.
    Headley, William C.
    PROCEEDINGS OF THE 2019 ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING (WISEML '19), 2019, : 25 - 30
  • [8] Enhanced Adversarial Strategically-Timed Attacks Against Deep Reinforcement Learning
    Yang, Chao-Han Huck
    Qi, Jun
    Chen, Pin-Yu
    Ouyang, Yi
    Hung, I-Te Danny
    Lee, Chin-Hui
    Ma, Xiaoli
    ICASSP 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2020, : 3407 - 3411
  • [9] Instance-based defense against adversarial attacks in Deep Reinforcement Learning
    Garcia, Javier
    Sagredo, Ismael
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 107
  • [10] Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning
    Akers, Matthew
    Barton, Armon
    COMPUTER, 2024, 57 (01) : 88 - 99