Secure machine learning against adversarial samples at test time

Cited by: 0
Authors
Jing Lin
Laurent L. Njilla
Kaiqi Xiong
Affiliations
[1] University of South Florida, ICNS Lab and Cyber Florida
[2] U.S. Air Force Research Laboratory, Cyber Assurance Branch
Keywords
Machine learning; Adversarial examples; Deep learning (DL)
Abstract
Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and they achieve outstanding performance. However, recent studies on adversarial examples, inputs crafted by adding malicious perturbations that are imperceptible to human eyes yet mislead machine learning models, show that machine learning models are vulnerable to security attacks. Although various adversarial retraining techniques have been developed in the past few years, none of them is scalable. In this paper, we propose a new iterative adversarial retraining approach that robustifies the model and reduces the effectiveness of adversarial inputs on DNN models. The proposed method retrains the model with both Gaussian noise augmentation and adversarial example generation for better generalization. Furthermore, an ensemble of models is used during the testing phase to increase the robust test accuracy. The results from our extensive experiments demonstrate that the proposed approach increases the robustness of the DNN model against various adversarial attacks, specifically the fast gradient sign method (FGSM) attack, the Carlini and Wagner (C&W) attack, the projected gradient descent (PGD) attack, and the DeepFool attack. In particular, the robust classifier obtained by our approach maintains an average accuracy of 99% on the standard test set. Moreover, we empirically evaluate the runtime of two of the most effective adversarial attacks, the C&W attack and the basic iterative method (BIM) attack, and find that the C&W attack can exploit the GPU to generate adversarial examples faster than the BIM attack can. For this reason, we further develop a parallel implementation of the proposed approach, which makes it scalable to large datasets and complex models.
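
The abstract describes the approach but not its implementation. The sketch below is a minimal illustration, assuming a PyTorch image classifier with inputs in [0, 1]: each retraining round augments every batch with Gaussian-noise copies and adversarial copies (FGSM stands in here for the paper's full set of attacks), and a snapshot of the model is kept after each round so that predictions can be averaged at test time. The snapshot-averaging rule and all hyperparameters (sigma, eps, rounds, lr) are illustrative assumptions, not the authors' configuration.

import copy
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # FGSM, one of the attacks the paper evaluates: perturb the input by
    # eps in the direction of the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def retrain_round(model, loader, optimizer, sigma=0.1, eps=0.03, device="cpu"):
    # One retraining round over clean, Gaussian-noised, and adversarial
    # copies of every batch (the two augmentations named in the abstract).
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
        x_adv = fgsm(model, x, y, eps)
        optimizer.zero_grad()  # discard gradients left over from fgsm()
        batch, labels = torch.cat([x, x_noisy, x_adv]), torch.cat([y, y, y])
        F.cross_entropy(model(batch), labels).backward()
        optimizer.step()

def iterative_adversarial_retraining(model, loader, rounds=5, lr=1e-3, device="cpu"):
    # Run several retraining rounds; snapshot the model after each round
    # so the snapshots can be ensembled at test time.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    snapshots = []
    for _ in range(rounds):
        retrain_round(model, loader, optimizer, device=device)
        snapshots.append(copy.deepcopy(model).eval())
    return snapshots

@torch.no_grad()
def ensemble_predict(snapshots, x):
    # Average softmax outputs over the snapshots and take the argmax
    # (one plausible reading of the abstract's test-time ensemble).
    probs = torch.stack([F.softmax(m(x), dim=1) for m in snapshots])
    return probs.mean(dim=0).argmax(dim=1)

In this sketch, averaging softmax probabilities across snapshots is one simple way to realize the test-time ensemble mentioned in the abstract; the paper's actual combination rule may differ.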
Related Papers (showing 10 of 50)
  • [1] Secure machine learning against adversarial samples at test time
    Lin, Jing
    Njilla, Laurent L.
    Xiong, Kaiqi
    EURASIP JOURNAL ON INFORMATION SECURITY, 2022, 2022 (01)
  • [2] Robust Machine Learning against Adversarial Samples at Test Time
    Lin, Jing
    Njilla, Laurent L.
    Xiong, Kaiqi
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020
  • [3] Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks
    Panda, Priyadarshini
    Chakraborty, Indranil
    Roy, Kaushik
    IEEE ACCESS, 2019, 7: 70157-70168
  • [4] Adversarial Machine Learning Against Digital Watermarking
    Quiring, Erwin
    Rieck, Konrad
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018: 519-523
  • [5] Secure and Resilient Distributed Machine Learning Under Adversarial Environments
    Zhang, Rui
    Zhu, Quanyan
    2015 18TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2015: 644-651
  • [6] Secure and Resilient Distributed Machine Learning Under Adversarial Environments
    Zhang, Rui
    Zhu, Quanyan
    IEEE AEROSPACE AND ELECTRONIC SYSTEMS MAGAZINE, 2016, 31 (03): 34-36
  • [7] Adversarial Machine Learning for Protecting Against Online Manipulation
    Cresci, Stefano
    Petrocchi, Marinella
    Spognardi, Angelo
    Tognazzi, Stefano
    IEEE INTERNET COMPUTING, 2022, 26 (02): 47-52
  • [8] A Moving Target Defense against Adversarial Machine Learning
    Roy, Abhishek
    Chhabra, Anshuman
    Kamhoua, Charles A.
    Mohapatra, Prasant
    SEC'19: PROCEEDINGS OF THE 4TH ACM/IEEE SYMPOSIUM ON EDGE COMPUTING, 2019: 383-388
  • [9] Making Machine Learning Robust Against Adversarial Inputs
    Goodfellow, Ian
    McDaniel, Patrick
    Papernot, Nicolas
    COMMUNICATIONS OF THE ACM, 2018, 61 (07): 56-66
  • [10] Securing Pervasive Systems Against Adversarial Machine Learning
    Lagesse, Brent
    Burkard, Cody
    Perez, Julio
    2016 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATION WORKSHOPS (PERCOM WORKSHOPS), 2016