Fast Gradient Scaled Method for Generating Adversarial Examples

Cited: 0
Authors
Xu, Zhefeng [1]
Luo, Zhijian [1]
Mu, Jinlong [1]
Affiliations
[1] Hunan Institute of Traffic Engineering, Hengyang, Hunan, People's Republic of China
Keywords
adversarial examples; FGSM; FGScaledM; adversarial perturbations
DOI
10.1145/3529466.3529497
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are often human-perceptible because FGSM modifies every pixel by the same amplitude, computed from the sign of the gradient of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradient of the loss to the valid range and makes adversarial perturbations more human-imperceptible. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, our proposed FGScaledM generates finer-grained and more human-imperceptible adversarial perturbations than FGSM.
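The abstract's key contrast can be sketched in a few lines of code. The following is a minimal PyTorch sketch, not the authors' reference implementation: the function names fgsm_attack and fg_scaled_attack and the exact scaling rule (normalizing the gradient by its per-example maximum absolute value so the largest pixel change equals the budget eps) are illustrative assumptions based only on the abstract.

```python
# Minimal sketch of FGSM versus a scaled-gradient variant in the spirit of
# FGScaledM. Assumptions (not from the paper): PyTorch, image batches of
# shape (N, C, H, W) with values in [0, 1], and a per-example max-abs
# normalization of the gradient as the scaling rule.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # FGSM: every pixel moves by the same amplitude eps, in the direction
    # given by the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def fg_scaled_attack(model, x, y, eps):
    # Scaled-gradient variant: keep the gradient's relative magnitudes, so
    # pixels with small gradients receive proportionally small (and thus
    # less perceptible) perturbations.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    grad = x_adv.grad
    scale = eps / grad.abs().amax(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
    return (x_adv + scale * grad).clamp(0, 1).detach()
```

Under this sketch both attacks spend the same maximum per-pixel budget eps, but the scaled variant concentrates the perturbation where the gradient is largest, which matches the abstract's claim of finer-grained, less perceptible perturbations at similar attack success rates.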
Pages: 189 - 193
Page count: 5
Related papers
50 records in total
  • [21] Generating adversarial examples with collaborative generative models
    Xu, Lei
    Zhai, Junhai
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2024, 23 : 1077 - 1091
  • [22] Targeted Adversarial Examples Generating Method Based on cVAE in Black Box Settings
    Yu, Tingyue
    Wang, Shen
    Zhang, Chunrui
    Wang, Zhenbang
    Li, Yetian
    Yu, Xiangzhan
    CHINESE JOURNAL OF ELECTRONICS, 2021, 30 (05) : 866 - 875
  • [23] Generating Transferable Adversarial Examples for Speech Classification
    Kim, Hoki
    Park, Jinseong
    Lee, Jaewook
    PATTERN RECOGNITION, 2023, 137
  • [24] Generating adversarial examples with input significance indicator
    Qiu, Xiaofeng
    Zhou, Shuya
    NEUROCOMPUTING, 2020, 394 : 1 - 12
  • [25] Generating Fluent Adversarial Examples for Natural Languages
    Zhang, Huangzhao
    Zhou, Hao
    Miao, Ning
    Li, Lei
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 5564 - 5569
  • [26] Generating Adversarial Examples by Adversarial Networks for Semi-supervised Learning
    Ma, Yun
    Mao, Xudong
    Chen, Yangbin
    Li, Qing
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2019, 2019, 11881 : 115 - 129
  • [27] Nesterov Adam Iterative Fast Gradient Method for Adversarial Attacks
    Chen, Cheng
    Wang, Zhiguang
    Fan, Yongnian
    Zhang, Xue
    Li, Dawei
    Lu, Qiang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT I, 2022, 13529 : 586 - 598
  • [28] A Unified Gradient Regularization Family for Adversarial Examples
    Lyu, Chunchuan
    Huang, Kaizhu
    Liang, Hai-Ning
    2015 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2015, : 301 - 309
  • [29] Timing Attack on Random Forests for Generating Adversarial Examples
    Dan, Yuichiro
    Shibahara, Toshiki
    Takahashi, Junko
    ADVANCES IN INFORMATION AND COMPUTER SECURITY (IWSEC 2020), 2020, 12231 : 285 - 302
  • [30] Generating adversarial examples for DNN using pooling layers
    Zhang, Yueling
    Pu, Geguang
    Zhang, Min
    Yang, William
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 37 (04) : 4615 - 4620