Fast Gradient Scaled Method for Generating Adversarial Examples

Citations: 0
Authors
Xu, Zhefeng [1 ]
Luo, Zhijian [1 ]
Mu, Jinlong [1 ]
Affiliations
[1] Hunan Institute of Traffic Engineering, Hengyang, Hunan, People's Republic of China
Keywords
adversarial examples; FGSM; FGScaledM; adversarial perturbations
DOI
10.1145/3529466.3529497
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are often human-perceptible because FGSM modifies every pixel by the same amplitude, computed from the sign of the gradient of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss to the valid range and makes adversarial perturbations more imperceptible to humans. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, our proposed FGScaledM generates finer-grained and more human-imperceptible adversarial perturbations than FGSM.
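The contrast described in the abstract can be sketched in NumPy: FGSM moves every pixel by the same amplitude epsilon, keeping only the sign of the gradient, whereas a scaled-gradient update of the kind FGScaledM describes keeps the relative gradient magnitudes. The exact scaling rule used by the paper is not given in the abstract; the max-normalization below (`fgscaled_perturbation`) is an assumption chosen to map gradients into the valid range [-eps, eps].

```python
import numpy as np

def fgsm_perturbation(grad, eps):
    # FGSM: every pixel is shifted by exactly +/- eps;
    # only the direction comes from the gradient sign.
    return eps * np.sign(grad)

def fgscaled_perturbation(grad, eps):
    # Hypothetical FGScaledM-style update (assumed scaling rule):
    # rescale the raw gradient into [-eps, eps], so pixels with
    # small gradients receive proportionally smaller, less
    # perceptible perturbations.
    max_abs = np.max(np.abs(grad))
    if max_abs == 0:
        return np.zeros_like(grad)
    return eps * grad / max_abs

# Toy gradient for four pixels.
grad = np.array([0.02, -0.5, 0.001, 0.9])
print(fgsm_perturbation(grad, 0.1))      # every entry is +/- 0.1
print(fgscaled_perturbation(grad, 0.1))  # magnitudes follow the gradient
```

In both cases the perturbation is bounded by eps, so the attack budget is comparable; the scaled version simply spends less of that budget on pixels whose gradients are small, which is the intuition behind the finer-grained perturbations reported in the experiments.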
Pages: 189-193
Page count: 5
Related Papers
50 records
  • [41] Gu, Zhaoquan; Xie, Yushun; Hu, Weixiong; Yin, Lihua; Han, Yi; Tian, Zhihong. Marginal Attacks of Generating Adversarial Examples for Spam Filtering. Chinese Journal of Electronics, 2021, 30(04): 595-602
  • [42] Zhang, Yun; Li, Hongwei; Xu, Guowen; Luo, Xizhao; Dong, Guishan. Generating Audio Adversarial Examples with Ensemble Substituted Models. IEEE International Conference on Communications (ICC 2021), 2021
  • [43] Zhao, Tengfei; Ge, Zhaocheng; Hu, Hanping; Shi, Dingmeng. MESDeceiver: Efficiently Generating Natural Language Adversarial Examples. 2022 International Joint Conference on Neural Networks (IJCNN), 2022
  • [44] Kaya, Yigitcan; Zafar, Muhammad Bilal; Aydore, Sergul; Rauschmayr, Nathalie; Kenthapadi, Krishnaram. Generating Distributional Adversarial Examples to Evade Statistical Detectors. International Conference on Machine Learning, Vol. 162, 2022
  • [45] Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu. Generating adversarial examples without specifying a target model. PeerJ Computer Science, 2021, 7
  • [46] Zhang, Hongting; Zhou, Pan; Yan, Qiben; Liu, Xiao-Yang. Generating Robust Audio Adversarial Examples with Temporal Dependency. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020), 2020: 3167-3173
  • [47] Clare, Luana; Correia, Joao. Generating Adversarial Examples through Latent Space Exploration of Generative Adversarial Networks. Proceedings of the 2023 Genetic and Evolutionary Computation Conference Companion (GECCO 2023 Companion), 2023: 1760-1767
  • [48] Naqvi, Syed Muhammad Ali; Shabaz, Mohammad; Khan, Muhammad Attique; Hassan, Syeda Iqra. Adversarial Attacks on Visual Objects Using the Fast Gradient Sign Method. Journal of Grid Computing, 2023, 21
  • [49] Naqvi, Syed Muhammad Ali; Shabaz, Mohammad; Khan, Muhammad Attique; Hassan, Syeda Iqra. Adversarial Attacks on Visual Objects Using the Fast Gradient Sign Method. Journal of Grid Computing, 2023, 21(04)
  • [50] Huang, Po-Hao; Lan, Yung-Yuan; Harriman, Wilbert; Chiuwanara, Venesia; Wang, Ting-Chi. Fast and Accurate Detection of Audio Adversarial Examples. 2023 IEEE International Symposium on Circuits and Systems (ISCAS), 2023