Fast Gradient Scaled Method for Generating Adversarial Examples

Cited: 0
Authors
Xu, Zhefeng [1 ]
Luo, Zhijian [1 ]
Mu, Jinlong [1 ]
Affiliations
[1] Hunan Inst Traff Engn, Hengyang, Hunan, Peoples R China
Keywords
adversarial examples; FGSM; FGScaledM; adversarial perturbations
DOI
10.1145/3529466.3529497
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are usually human-perceptible because FGSM modifies every pixel by the same amplitude, determined only by the sign of the gradient of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss to the valid range and makes adversarial perturbations more human-imperceptible. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, our proposed FGScaledM generates more fine-grained and more human-imperceptible adversarial perturbations than FGSM.
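To make the contrast concrete, below is a minimal sketch (PyTorch assumed; this is not the authors' released code) of FGSM's uniform sign step next to a scaled-gradient step in the spirit of FGScaledM. The per-image max-magnitude normalization used here is an assumption for illustration; the paper's exact scaling rule may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """Classic FGSM: every pixel is shifted by exactly +/- eps,
    determined only by the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + eps * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

def fgscaled_attack(model, x, y, eps=0.1):
    """Scaled-gradient sketch (assumed normalization): the raw gradient is
    rescaled per image so its largest entry maps to +/- eps; pixels with
    smaller gradients receive proportionally smaller, fine-grained
    perturbations instead of a uniform +/- eps step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    scale = grad.abs().amax(dim=(1, 2, 3), keepdim=True) + 1e-12  # per-image max magnitude
    x_adv = x + eps * grad / scale
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```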
Pages: 189-193 (5 pages)
Related Papers (50 records in total)
  • [1] Generate adversarial examples by adaptive moment iterative fast gradient sign method
    Zhang, Jiebao
    Qian, Wenhua
    Nie, Rencan
    Cao, Jinde
    Xu, Dan
    APPLIED INTELLIGENCE, 2023, 53 (01) : 1101 - 1114
  • [2] Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method
    Xu, Jin
    PROCEEDINGS OF 2020 IEEE 11TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND SERVICE SCIENCE (ICSESS 2020), 2020, : 244 - 249
  • [3] Generating Adversarial Examples with Adversarial Networks
    Xiao, Chaowei
    Li, Bo
    Zhu, Jun-Yan
    He, Warren
    Liu, Mingyan
    Song, Dawn
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 3905 - 3911
  • [4] Fast Local Attack: Generating Local Adversarial Examples for Object Detectors
    Liao, Quanyu
    Wang, Xin
    Kong, Bin
    Lyu, Siwei
    Yin, Youbing
    Song, Qi
    Wu, Xi
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [5] Survey on Generating Adversarial Examples
    Pan W.-W.
    Wang X.-Y.
    Song M.-L.
    Chen C.
    Ruan Jian Xue Bao/Journal of Software, 2020, 31 (01) : 67 - 81
  • [6] Gradient Aggregation Boosting Adversarial Examples Transferability Method
    Deng, Shiyun
    Ling, Jie
    Computer Engineering and Applications, 2024, 60 (14) : 275 - 282
  • [7] Hardening against adversarial examples with the smooth gradient method
    Mosca, Alan
    Magoulas, George D.
    SOFT COMPUTING, 2018, 22 (10) : 3203 - 3213
  • [8] Generating Adversarial Examples With Conditional Generative Adversarial Net
    Yu, Ping
    Song, Kaitao
    Lu, Jianfeng
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 676 - 681