Defense Against Adversarial Attacks in Deep Learning

Cited by: 5
Authors
Li, Yuancheng [1 ]
Wang, Yimeng [1 ]
Affiliations
[1] North China Elect Power Univ, Sch Control & Comp Engn, Beijing 102206, Peoples R China
Source
APPLIED SCIENCES-BASEL, 2019, Vol. 9, Issue 01
Keywords
adversarial attacks; deep learning; face recognition; Wasserstein generative adversarial networks (W-GAN); denoiser; knowledge transfer; neural networks
DOI
10.3390/app9010076
Chinese Library Classification
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Neural networks are highly vulnerable to adversarial examples, which threatens their application in security-critical systems such as face recognition and autonomous driving. In response to this problem, we propose a new defensive strategy. In our strategy, we propose a new deep denoising neural network, called UDDN, to remove the noise from adversarial samples. A standard denoiser suffers from the amplification effect, in which small residual adversarial noise grows layer by layer and ultimately leads to misclassification. The proposed denoiser overcomes this problem by using a special loss function, defined as the difference between the model outputs activated by the original image and by the denoised image. At the same time, we propose a new model-training algorithm based on knowledge transfer, which makes the model resistant to slight image perturbations and generalize better around the training samples. Our proposed defensive strategy is robust against both white-box and black-box attacks, and it is applicable to any deep neural network-based model. In the experiments, we apply the defensive strategy to a face recognition model. The experimental results show that our algorithm can effectively resist adversarial attacks and improve the accuracy of the model.
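The loss the abstract describes can be illustrated with a minimal sketch: rather than comparing pixels, the denoiser is trained to match the target model's outputs for the original image and for the denoised image, so that residual noise the model would amplify is penalized directly. The function name and the use of flat activation lists are illustrative assumptions, not the paper's implementation.

```python
def feature_space_loss(features_orig, features_denoised):
    """Hypothetical sketch of the output-space denoiser loss from the
    abstract: compare the target model's activations for the original
    image with those for the denoised image, instead of comparing raw
    pixels. Both arguments are flat lists of feature activations.
    """
    if len(features_orig) != len(features_denoised):
        raise ValueError("feature vectors must have the same length")
    # Mean absolute difference between the two activation vectors;
    # zero when the denoised image drives the model identically.
    diffs = (abs(a - b) for a, b in zip(features_orig, features_denoised))
    return sum(diffs) / len(features_orig)
```

In a full pipeline this would be computed as L = |f(x) − f(D(x′))|, where f is a chosen layer of the protected classifier, x the clean image, x′ its adversarial version, and D the denoiser being trained.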
Pages: 14
Related Papers
(50 records total)
  • [41] Defensive Bit Planes: Defense Against Adversarial Attacks
    Tripathi, Achyut Mani
    Behera, Swarup Ranjan
    Paul, Konark
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [42] Cyclic Defense GAN Against Speech Adversarial Attacks
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro Lameiras
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1769 - 1773
  • [43] Detection defense against adversarial attacks with saliency map
    Ye, Dengpan
    Chen, Chuanxi
    Liu, Changrui
    Wang, Hao
    Jiang, Shunzhi
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 10193 - 10210
  • [44] Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks
    Li, Xiang
    Ji, Shihao
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 191 - 207
  • [45] DEFENSE AGAINST ADVERSARIAL ATTACKS ON SPOOFING COUNTERMEASURES OF ASV
    Wu, Haibin
    Liu, Songxiang
    Meng, Helen
    Lee, Hung-yi
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6564 - 6568
  • [46] Symmetry Defense Against CNN Adversarial Perturbation Attacks
    Lindqvist, Blerta
    [J]. INFORMATION SECURITY, ISC 2023, 2023, 14411 : 142 - 160
  • [47] Universal Inverse Perturbation Defense Against Adversarial Attacks
    Chen, Jin-Yin
    Wu, Chang-An
    Zheng, Hai-Bin
    Wang, Wei
    Wen, Hao
    [J]. Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (10): : 2172 - 2187
  • [48] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529
  • [49] Adversarial Machine Learning Attacks against Intrusion Detection Systems: A Survey on Strategies and Defense
    Alotaibi, Afnan
    Rassam, Murad A.
    [J]. FUTURE INTERNET, 2023, 15 (02)
  • [50] Novel Evasion Attacks Against Adversarial Training Defense for Smart Grid Federated Learning
    Bondok, Atef H.
    Mahmoud, Mohamed
    Badr, Mahmoud M.
    Fouda, Mostafa M.
    Abdallah, Mohamed
    Alsabaan, Maazen
    [J]. IEEE ACCESS, 2023, 11 : 112953 - 112972