Adversarial example detection by predicting adversarial noise in the frequency domain

Cited by: 1
Authors
Jung, Seunghwan [1 ]
Chung, Minyoung [2 ]
Shin, Yeong-Gil [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Comp Sci & Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[2] Soongsil Univ, Sch Software, 369 Sangdo Ro, Seoul 06978, South Korea
Keywords
Adversarial example detection; Adversarial noise prediction; Frequency domain classification; Prediction-based adversarial detection
DOI
10.1007/s11042-023-14608-6
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recent advances in deep neural network (DNN) techniques have increased the importance of the security and robustness of algorithms in which DNNs are applied. However, several studies have demonstrated that neural networks are vulnerable to adversarial examples, which are generated by adding crafted adversarial noise to input images. Because this noise is typically imperceptible to the human eye, defending DNNs against it is difficult. One line of defense is to detect adversarial examples by analyzing characteristics of the input images. Recent studies have used the hidden-layer outputs of the target classifier to improve robustness, but they require access to the target classifier. Moreover, they provide no post-processing step for detected adversarial examples, which are simply discarded. To resolve these problems, we propose a novel detection-based method that predicts the adversarial noise and detects the adversarial example based on the predicted noise, without any information about the target classifier. We first generate adversarial examples and obtain the adversarial noise as the residual between each original image and its adversarial counterpart. We then train the proposed adversarial noise predictor to estimate the adversarial noise image, and train the adversarial detector on the input images together with the predicted noise. The proposed framework is agnostic to the input image modality. Moreover, the predicted noise can be used to reconstruct detected adversarial examples into non-adversarial images instead of discarding them. We evaluated our method against the fast gradient sign method (FGSM), basic iterative method (BIM), projected gradient descent (PGD), DeepFool, and Carlini & Wagner attacks on the CIFAR-10 and CIFAR-100 datasets provided by the Canadian Institute for Advanced Research (CIFAR). Our method achieved significant improvements in detection accuracy over state-of-the-art methods and eliminated the waste of discarding detected adversarial examples. Being agnostic to the input image modality, the method demonstrated that the noise predictor successfully captures the noise in the Fourier domain and improves detection performance; the reconstruction process using the predicted noise resolves the post-processing problem of detected adversarial examples.
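The first step described in the abstract, obtaining adversarial noise as the residual between an original image and its adversarial counterpart, can be illustrated with FGSM, one of the attacks evaluated. The following is a minimal PyTorch sketch, not the paper's implementation; the function name fgsm_noise, the eps budget, and the classifier model are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_noise(model, x, y, eps=8 / 255):
    """Craft an FGSM adversarial example for classifier `model` (any
    differentiable net; a hypothetical stand-in here) and return it
    with the residual noise r = x_adv - x, which serves as the
    regression target for a noise predictor. Assumes inputs in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss the attack ascends
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv, x_adv - x.detach()      # adversarial image, its noise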
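The remaining steps, predicting the noise, classifying in the frequency domain, and reconstructing detected adversarial images, could look as sketched below. The record does not specify the paper's networks, so the small conv net, the log-amplitude spectrum features, and the names NoisePredictor, frequency_features, and reconstruct are all assumptions.

```python
import torch
import torch.nn as nn

class NoisePredictor(nn.Module):
    """Toy stand-in for the paper's noise predictor: regresses the
    adversarial noise directly from the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def frequency_features(x, noise_hat):
    """Concatenate log-amplitude Fourier spectra of the image and the
    predicted noise; one plausible realization of the 'frequency
    domain classification' idea named in the abstract."""
    amp = lambda t: torch.log1p(torch.abs(torch.fft.fft2(t)))
    return torch.cat([amp(x), amp(noise_hat)], dim=1)  # (N, 6, H, W)

def reconstruct(x, noise_hat, is_adv):
    """Subtract the predicted noise from inputs flagged as adversarial
    instead of discarding them (the abstract's post-processing step)."""
    return torch.where(is_adv.view(-1, 1, 1, 1),
                       (x - noise_hat).clamp(0.0, 1.0), x)
```

Under these assumptions, the predictor would be trained to regress the residuals produced by the attack sketch above, a binary detector consuming frequency_features would flag adversarial inputs, and flagged images would be denoised by reconstruct rather than discarded.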
Pages: 25235-25251
Page count: 17
Related papers
50 items in total
  • [21] KfreqGAN: Unsupervised detection of sequence anomaly with adversarial learning and frequency domain information
    Yao, Yueyue
    Ma, Jianghong
    Ye, Yunming
    KNOWLEDGE-BASED SYSTEMS, 2022, 236
  • [23] Model-agnostic adversarial example detection via high-frequency amplification
    Li, Qiao
    Chen, Jing
    He, Kun
    Zhang, Zijun
    Du, Ruiying
    She, Jisi
    Wang, Xinxin
    COMPUTERS & SECURITY, 2024, 141
  • [24] MANDA: On Adversarial Example Detection for Network Intrusion Detection System
    Wang, Ning
    Chen, Yimin
    Hu, Yang
    Lou, Wenjing
    Hou, Y. Thomas
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021
  • [25] Adversarial Domain Adaptation for Duplicate Question Detection
    Shah, Darsh J.
    Lei, Tao
    Moschitti, Alessandro
    Romeo, Salvatore
    Nakov, Preslav
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018: 1056-1063
  • [26] MANDA: On Adversarial Example Detection for Network Intrusion Detection System
    Wang, Ning
    Chen, Yimin
    Xiao, Yang
    Hu, Yang
    Lou, Wenjing
    Hou, Y. Thomas
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (02): 1139-1153
  • [27] Adversarial Example Games
    Bose, Avishek Joey
    Gidel, Gauthier
    Berard, Hugo
    Cianflone, Andre
    Vincent, Pascal
    Lacoste-Julien, Simon
    Hamilton, William L.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [28] The Defense of Adversarial Example with Conditional Generative Adversarial Networks
    Yu, Fangchao
    Wang, Li
    Fang, Xianjin
    Zhang, Youwen
    SECURITY AND COMMUNICATION NETWORKS, 2020, 2020
  • [29] Feature Fusion Based Adversarial Example Detection Against Second-Round Adversarial Attacks
    Qin C.
    Chen Y.
    Chen K.
    Dong X.
    Zhang W.
    Mao X.
    He Y.
    Yu N.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (05): 1029-1040
  • [30] Adversarial Example Detection with Latent Representation Dynamic Prototype
    Wang, Taowen
    Qian, Zhuang
    Yang, Xi
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450: 525-536