Saliency Map-Based Local White-Box Adversarial Attack Against Deep Neural Networks

Cited by: 1
Authors
Liu, Haohan [1 ,2 ]
Zuo, Xingquan [1 ,2 ]
Huang, Hai [1 ]
Wan, Xing [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing, Peoples R China
[2] Minist Educ, Key Lab Trustworthy Distributed Comp & Serv, Beijing, Peoples R China
Source
Keywords
Deep learning; Saliency map; Local white-box attack; Adversarial attack
DOI
10.1007/978-3-031-20500-2_1
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Current deep neural networks (DNNs) are easily fooled by adversarial examples, which are generated by adding small, well-designed, human-imperceptible perturbations to clean examples. Adversarial examples mislead a deep learning (DL) model into making wrong predictions. Most existing white-box attack methods in the image domain are based on the model's global gradient: the global gradient is first calculated, and the perturbation is then added along the gradient direction. These methods usually achieve a high attack success rate, but they also have shortcomings, such as excessive perturbation that is easily detected by the human eye. Therefore, in this paper we propose a Saliency Map-based Local white-box Adversarial Attack method (SMLAA). SMLAA introduces the saliency map used in the interpretability of artificial intelligence. First, Gradient-weighted Class Activation Mapping (Grad-CAM) is utilized to provide a visual interpretation of model decisions and locate the important areas in an image. Then, the perturbation is added only to these important local areas, reducing the overall magnitude of the perturbation. Experimental results show that, compared with global attack methods, SMLAA reduces the average robustness measure by 9%-24% while maintaining the attack success rate, meaning SMLAA achieves a high attack success rate with fewer pixels changed.
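The pipeline sketched in the abstract (Grad-CAM locates the image regions that drive the model's decision, and a gradient-sign perturbation is then applied only inside those regions) can be illustrated roughly as follows. This is a minimal sketch, not the authors' reference implementation: it assumes a PyTorch/torchvision ResNet-18 classifier, input pixels in [0, 1], a single FGSM-style step, and illustrative values for the mask threshold (0.5) and the perturbation budget (eps = 8/255).

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier used only for illustration; any CNN with an accessible
# last convolutional block would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4  # assumed Grad-CAM layer for ResNet-18

acts, grads = {}, {}

def _save_activation(module, inputs, output):
    acts["a"] = output
    # Tensor hook captures the gradient flowing back into this activation.
    output.register_hook(lambda g: grads.update(g=g))

target_layer.register_forward_hook(_save_activation)

def gradcam_mask(x, class_idx, thresh=0.5):
    """Binary mask of the regions Grad-CAM deems important for class_idx."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)        # GAP of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True)).detach()
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return (cam >= thresh).float()                             # keep salient pixels only

def local_fgsm(x, label, eps=8 / 255):
    """One FGSM-style step restricted to the Grad-CAM salient region."""
    mask = gradcam_mask(x, label)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), torch.tensor([label]))
    model.zero_grad()
    loss.backward()
    perturb = eps * x_adv.grad.sign() * mask                   # zero outside the mask
    return (x_adv + perturb).clamp(0, 1).detach()

# Hypothetical usage: x is a (1, 3, 224, 224) image tensor in [0, 1].
# x_adv = local_fgsm(x, label=243)

Restricting the gradient-sign step to the Grad-CAM mask is what keeps the number of modified pixels, and hence the average perturbation magnitude, small relative to a global gradient attack.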
Pages: 3-14
Number of pages: 12
Related Papers
50 records in total
  • [41] Electromagnetic signal fast adversarial attack method based on Jacobian saliency map
    Zhang J.
    Zhou X.
    Zhang Y.
    Wang Z.
    Tongxin Xuebao/Journal on Communications, 2024, 45 (01): : 180 - 193
  • [42] White-Box Target Attack for EEG-Based BCI Regression Problems
    Meng, Lubin
    Lin, Chin-Teng
    Jung, Tzyy-Ping
    Wu, Dongrui
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I, 2019, 11953 : 476 - 488
  • [43] FOSTERING THE ROBUSTNESS OF WHITE-BOX DEEP NEURAL NETWORK WATERMARKS BY NEURON ALIGNMENT
    Li, Fang-Qi
    Wang, Shi-Lin
    Zhu, Yun
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3049 - 3053
  • [44] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
    Li, Yandong
    Li, Lijun
    Wang, Liqiang
    Zhang, Tong
    Gong, Boqing
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [45] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [46] Cryptanalysis of a white-box SM4 implementation based on collision attack
    Wang, Rusi
    Guo, Hua
    Lu, Jiqiang
    Liu, Jianwei
    IET INFORMATION SECURITY, 2021, : 18 - 27
  • [47] Cryptanalysis of a white-box SM4 implementation based on collision attack
    Wang, Rusi
    Guo, Hua
    Lu, Jiqiang
    Liu, Jianwei
    IET Information Security, 2022, 16 (01) : 18 - 27
  • [48] Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition
    Wang, Hanrui
    Wang, Shuo
    Jin, Zhe
    Wang, Yandan
    Chen, Cunjian
    Tistarelli, Massimo
    2021 16TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2021), 2021,
  • [49] Mutation-Based White Box Testing of Deep Neural Networks
    Cetiner, Gokhan
    Yayan, Ugur
    Yazici, Ahmet
    IEEE ACCESS, 2024, 12 : 160156 - 160174
  • [50] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505