Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection

Cited by: 0
Authors
Imran, Muhammad [1 ]
Appice, Annalisa [1 ,2 ]
Malerba, Donato [1 ,2 ]
Affiliations
[1] Univ Bari Aldo Moro, Dept Comp Sci, Via Orabona 4, I-70125 Bari, Italy
[2] Consorzio Interuniv Nazl Informat CINI, Via Orabona 4, I-70125 Bari, Italy
Keywords
Windows PE malware; adversarial attacks; integrity violation; transferability; adversarial training; explainable artificial intelligence; defenses
DOI: 10.3390/fi16050168
CLC Classification: TP [Automation Technology; Computer Technology]
Discipline Code: 0812
Abstract
During the last decade, the cybersecurity literature has assigned machine learning a prominent role as a powerful security paradigm for recognising malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are samples carefully manipulated at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, MalConv and LGBM, trained to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files, while LGBM is a Gradient-Boosted Decision Tree model learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks when securing machine learning models through the adversarial training strategy. The main contributions of this article are as follows: (1) We extend existing machine learning studies, which commonly consider small datasets, by increasing the size of the evaluation dataset used to explore the evasion ability of state-of-the-art Windows PE attack methods. (2) To the best of our knowledge, we are the first to carry out an exploratory study explaining how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means of securing effective decision models against adversarial Windows PE malware files generated with the considered attack methods. The study shows that GAMMA is the most effective evasion method in the comparative analysis performed. On the other hand, it shows that the adversarial training strategy can help recognise adversarial PE malware generated with GAMMA, while also explaining how it changes model decisions.
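The GAMMA attack highlighted in the abstract evades detectors by injecting benign content (e.g. padding bytes or benign-file sections) into the malware file and choosing the injected payload with a genetic search, so the program's behaviour is preserved while the detector's score drops. The following is a minimal, hypothetical Python sketch of that idea; the `score` function is a toy stand-in for a trained detector (it is not MalConv or LGBM), and all names and parameters are illustrative assumptions, not the paper's implementation.

```python
import random

def score(pe_bytes: bytes) -> float:
    # Toy stand-in for a detector: returns a "maliciousness" score in [0, 1].
    # Here it simply measures the fraction of 0xCC bytes, purely for illustration.
    if not pe_bytes:
        return 0.0
    return pe_bytes.count(0xCC) / len(pe_bytes)

def gamma_style_padding(malware: bytes, benign_chunks, generations=30,
                        population=8, seed=0) -> bytes:
    """GAMMA-style sketch: append benign content chosen by a simple genetic
    search so the detector score drops, while the original bytes (and thus
    the program's functionality) stay untouched."""
    rng = random.Random(seed)
    # Each candidate is a tuple of indices into benign_chunks to append.
    pop = [tuple(rng.randrange(len(benign_chunks))
                 for _ in range(rng.randint(1, 4)))
           for _ in range(population)]

    def fitness(cand):
        payload = b"".join(benign_chunks[i] for i in cand)
        return score(malware + payload)  # lower is better for the attacker

    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: population // 2]          # keep the best half
        children = []
        for p in parents:                         # mutate one gene per parent
            mutated = list(p)
            mutated[rng.randrange(len(mutated))] = rng.randrange(len(benign_chunks))
            children.append(tuple(mutated))
        pop = parents + children

    best = min(pop, key=fitness)
    return malware + b"".join(benign_chunks[i] for i in best)
```

Because the payload is only appended, the adversarial file keeps the original bytes as a prefix, which is what makes this a functionality-preserving manipulation in the real attack.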
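The adversarial training strategy evaluated in contribution (3) can be summarised as: generate adversarial variants of the training malware, fold the ones that evade the current model back into the training set, and retrain. A minimal sketch under toy assumptions follows; the one-feature threshold "detector" and the zero-padding "attack" are hypothetical illustrations, not the models or attacks studied in the paper.

```python
def extract_feature(sample: bytes) -> float:
    # Hypothetical static feature: fraction of non-zero bytes.
    return sum(b != 0 for b in sample) / max(len(sample), 1)

def evade(sample: bytes) -> bytes:
    # Toy functionality-preserving attack: append zero padding,
    # which halves the feature value and can push it below the threshold.
    return sample + bytes(len(sample))

def train_threshold(malicious, benign) -> float:
    # "Training" here is just placing a threshold midway between the
    # mean feature of each class; samples above it are flagged malicious.
    m = sum(map(extract_feature, malicious)) / len(malicious)
    b = sum(map(extract_feature, benign)) / len(benign)
    return (m + b) / 2

def adversarial_training(malicious, benign, rounds=3) -> float:
    for _ in range(rounds):
        t = train_threshold(malicious, benign)
        # Generate adversarial variants that evade the current model...
        adv = [evade(s) for s in malicious if extract_feature(evade(s)) < t]
        if not adv:
            break
        # ...and add them to the malicious training set before retraining.
        malicious = malicious + adv
    return train_threshold(malicious, benign)
```

The hardened threshold ends up lower than the plainly trained one, so padded variants that previously slipped under it are flagged again, which mirrors the effect the abstract reports for adversarial training against GAMMA.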
Pages: 30
Related Papers (50 in total)
  • [1] Ling, Xiang; Wu, Lingfei; Zhang, Jiangyu; Qu, Zhenqing; Deng, Wei; Chen, Xiang; Qian, Yaguan; Wu, Chunming; Ji, Shouling; Luo, Tianyue; Wu, Jingzheng; Wu, Yanjun. Adversarial attacks against Windows PE malware detection: A survey of the state-of-the-art. COMPUTERS & SECURITY, 2023, 128.
  • [2] Demetrio, Luca; Biggio, Battista; Roli, Fabio. Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware. IEEE SECURITY & PRIVACY, 2022, 20 (05): 77-85.
  • [3] Demetrio, Luca; Coull, Scott E.; Biggio, Battista; Lagorio, Giovanni; Armando, Alessandro; Roli, Fabio. Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection. ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2021, 24 (04).
  • [4] Chen, Lingwei; Hou, Shifu; Ye, Yanfang; Chen, Lifei. An Adversarial Machine Learning Model Against Android Malware Evasion Attacks. WEB AND BIG DATA, 2017, 10612: 43-55.
  • [5] Kocak, Aynur; Sogut, Esra; Alkan, Mustafa; Erdem, O. Ayhan. Detection of different windows PE malware using machine learning methods. JOURNAL OF POLYTECHNIC-POLITEKNIK DERGISI, 2023, 26 (03): 1185-1197.
  • [6] Rathore, Hemant; Sasan, Animesh; Sahay, Sanjay K.; Sewak, Mohit. Defending malware detection models against evasion based adversarial attacks. PATTERN RECOGNITION LETTERS, 2022, 164: 119-125.
  • [7] Rathore, Hemant; Samavedhi, Adithya; Sahay, Sanjay K.; Sewak, Mohit. Robust Malware Detection Models: Learning from Adversarial Attacks and Defenses. FORENSIC SCIENCE INTERNATIONAL-DIGITAL INVESTIGATION, 2021, 37.
  • [8] Marshev, I. I.; Zhukovskii, E. V.; Aleksandrova, E. B. Protection against Adversarial Attacks on Malware Detectors Using Machine Learning Algorithms. AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2021, 55 (08): 1025-1028.
  • [9] Jyothish, A.; Mathew, Ashik; Vinod, P. Effectiveness of machine learning based android malware detectors against adversarial attacks. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (03): 2549-2569.