Defense against adversarial examples based on wavelet domain analysis

Cited: 5
Authors
Sarvar, Armaghan [1 ]
Amirmazlaghani, Maryam [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Comp Engn, Tehran, Iran
Keywords
Deep learning; Adversarial examples; Adversarial detection; Input data reconstruction; Wavelet domain;
DOI
10.1007/s10489-022-03159-2
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, machine learning, and deep learning in particular, has shown powerful performance on challenging tasks. However, research has shown that deep learning systems can be vulnerable to malicious inputs modified by perturbations crafted to be imperceptible to humans. These adversarial examples can fool a classifier into misclassifying them with high confidence, which limits the deployment of deep learning systems wherever the security of the learning model must be guaranteed. In this paper, we propose a two-level defense against adversarial attacks consisting of an adversarial detection module and an input data reconstruction module. The detector distinguishes normal from adversarial examples fed to a deep image classification model, and the reconstructor transforms detected adversarial images back into their corresponding normal samples. Both the detection and reconstruction modules are novel, fast signal-processing techniques that analyze the attacks in the wavelet domain. We show that our defense is effective against state-of-the-art attacks without modifying the protected classifier or relying on any deep learning model that could itself be exposed to attacks.
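To make the wavelet-domain idea concrete, the following is a minimal, self-contained sketch of one-level Haar wavelet decomposition, soft-thresholding of the detail coefficients, and reconstruction. The single decomposition level, the Haar basis, and the threshold value are illustrative assumptions only; the paper's actual detection statistic and reconstruction scheme are not reproduced here.

```python
# Sketch: adversarial perturbations tend to concentrate in high-frequency
# (detail) wavelet coefficients, so shrinking those coefficients and
# inverting the transform can approximately restore the clean input.

def haar_decompose(signal):
    """One-level Haar transform: split into approximation and detail parts."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small, noise-like details vanish."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

# A clean step signal plus a small alternating (high-frequency) perturbation.
clean = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
noise = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1]
perturbed = [x + e for x, e in zip(clean, noise)]

# Thresholding the details removes the perturbation almost entirely here;
# t = 0.15 is an assumed value chosen for this toy example.
a, d = haar_decompose(perturbed)
restored = haar_reconstruct(a, soft_threshold(d, 0.15))
print(restored)
```

In this toy case the perturbation lives entirely in the detail band, so the restored signal matches the clean one up to floating-point error; real images need multi-level 2D transforms and a principled threshold choice.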
Pages: 423 - 439 (17 pages)
Related papers (50 in total)
  • [1] Defense against adversarial examples based on wavelet domain analysis
    Sarvar, Armaghan
    Amirmazlaghani, Maryam
    [J]. APPLIED INTELLIGENCE, 2023, 53 : 423 - 439
  • [2] Joint contrastive learning and frequency domain defense against adversarial examples
    Yang, Jin
    Li, Zhi
    Liu, Shuaiwei
    Hong, Bo
    Wang, Weidong
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (25): : 18623 - 18639
  • [4] D2Defend: Dual-Domain based Defense against Adversarial Examples
    Yan, Xin
    Li, Yanjie
    Dai, Tao
    Bai, Yang
    Xia, Shu-Tao
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [5] Deep image prior based defense against adversarial examples
    Dai, Tao
    Feng, Yan
    Chen, Bin
    Lu, Jian
    Xia, Shu-Tao
    [J]. PATTERN RECOGNITION, 2022, 122
  • [6] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    [J]. IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [7] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    [J]. 2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102
  • [8] MoNet: Impressionism As A Defense Against Adversarial Examples
    Ge, Huangyi
    Chau, Sze Yiu
    Li, Ninghui
    [J]. 2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 246 - 255
  • [9] Key-Based Input Transformation Defense Against Adversarial Examples
    Qin, Yi
    Yue, Chuan
    [J]. 2021 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE (IPCCC), 2021,
  • [10] Advocating for Multiple Defense Strategies Against Adversarial Examples
    Araujo, Alexandre
    Meunier, Laurent
    Pinot, Rafael
    Negrevergne, Benjamin
    [J]. ECML PKDD 2020 WORKSHOPS, 2020, 1323 : 165 - 177