A Framework for Enhancing Deep Neural Networks Against Adversarial Malware

Cited by: 0
Authors
Li, Deqiang [1 ]
Li, Qianmu [1 ,2 ]
Ye, Yanfang [3 ]
Xu, Shouhuai [4 ,5 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[2] Wuyi Univ, Sch Intelligent Mfg, Jiangmen 529020, Peoples R China
[3] Case Western Reserve Univ, Dept Comp & Data Sci, Cleveland, OH 44106 USA
[4] Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX USA
[5] Univ Colorado, Dept Comp Sci, Colorado Springs, CO 80918 USA
Funding
National Key Research and Development Program of China; US National Science Foundation;
Keywords
Malware; Training; Robustness; Perturbation methods; Neural networks; Harmonic analysis; Feature extraction; Adversarial machine learning; adversarial malware detection; deep neural networks; malware classification;
DOI
10.1109/TNSE.2021.3051354
Chinese Library Classification
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Machine learning-based malware detection is known to be vulnerable to adversarial evasion attacks, and the state of the art is that no effective defenses against these attacks are known. In response to the adversarial malware classification challenge organized by MIT Lincoln Laboratory and associated with the AAAI-19 Workshop on Artificial Intelligence for Cyber Security (AICS'2019), we propose six guiding principles for enhancing the robustness of deep neural networks. Some of these principles are scattered in the literature; the others are introduced in this paper for the first time. Guided by these six principles, we propose a defense framework for enhancing the robustness of deep neural networks against adversarial malware evasion attacks. Experiments with the Drebin Android malware dataset show that the framework achieves an average accuracy of 98.49% against grey-box attacks, where the attacker knows some information about the defense and the defender knows some information about the attack, and an average accuracy of 89.14% against the more capable white-box attacks, where the attacker knows everything about the defense and the defender knows some information about the attack. The framework won the AICS'2019 challenge with a 76.02% accuracy, in a setting where the attacker (i.e., the challenge organizer) knew nothing about the defense and we (the defender) knew nothing about the attacks. This gap highlights the importance of the defender knowing something about the attack.
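To make the robustness-enhancement idea behind the abstract concrete, the following is a minimal Python/PyTorch sketch of one widely used hardening principle the paper builds on: adversarial training of a DNN detector over binary, Drebin-style feature vectors, with an addition-only perturbation constraint. This is not the authors' framework; the model architecture, perturbation budget, and all names and hyperparameters below are illustrative assumptions.

# Minimal, illustrative sketch (NOT the paper's framework): adversarial
# training of a DNN malware detector over binary, Drebin-style feature
# vectors. All names, sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MalwareMLP(nn.Module):
    """A small feed-forward detector over binary feature vectors."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 200),
            nn.ReLU(),
            nn.Linear(200, 2),  # classes: 0 = benign, 1 = malware
        )

    def forward(self, x):
        return self.net(x)

def perturb_add_only(model, x, y, n_flips=10):
    """Gradient-guided evasion sketch: flip the absent (0-valued) features
    whose gradients most increase the loss. Feature *addition* only, since
    removing features tends to break malware functionality."""
    x_in = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_in), y)
    loss.backward()
    grad = x_in.grad.detach()
    grad = grad.masked_fill(x > 0.5, float("-inf"))  # only 0-bits eligible
    idx = grad.topk(n_flips, dim=1).indices
    x_adv = x.clone()
    x_adv.scatter_(1, idx, 1.0)  # set the chosen features to 1
    return x_adv

def adversarial_training_step(model, opt, x, y):
    """One step on a 50/50 mix of clean and perturbed samples."""
    x_adv = perturb_add_only(model, x, y)
    opt.zero_grad()  # discard gradients accumulated while perturbing
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    n_features = 1000  # Drebin uses ~545k sparse features; shrunk here
    model = MalwareMLP(n_features)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random stand-in data: 64 samples, sparse binary features, labels.
    x = (torch.rand(64, n_features) < 0.05).float()
    y = torch.randint(0, 2, (64,))
    for step in range(5):
        print(f"step {step}: loss = {adversarial_training_step(model, opt, x, y):.4f}")

The addition-only constraint mirrors a common assumption in adversarial malware generation: inserting features (e.g., permissions or API calls) is less likely to break a sample's functionality than removing them, so the defender trains against the perturbations an attacker can most plausibly apply.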
Pages: 736 - 750
Number of pages: 15
Related Papers (50 records)
  • [1] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    [J]. MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [2] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
[J]. SYMMETRY-BASEL, 2021, 13 (03)
  • [3] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [4] ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
    Choi, Seok-Hwan
    Shin, Jin-Myeong
    Liu, Peng
    Choi, Yoon-Ho
    [J]. IEEE ACCESS, 2022, 10 : 33602 - 33615
  • [5] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    [J]. CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [6] Enhancing the Robustness of Deep Neural Networks by Meta-Adversarial Training
    Chang, You-Kang
    Zhao, Hong
    Wang, Wei-Jie
    [J]. INTERNATIONAL JOURNAL OF NETWORK SECURITY, 2023, 25 (01) : 122 - 130
  • [7] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    [J]. PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [8] Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
    Papernot, Nicolas
    McDaniel, Patrick
    Wu, Xi
    Jha, Somesh
    Swami, Ananthram
    [J]. 2016 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2016, : 582 - 597