Generating Adversarial Examples with Adversarial Networks

Cited: 0
Authors
Xiao, Chaowei [1 ]
Li, Bo [2 ]
Zhu, Jun-Yan [2 ,3 ]
He, Warren [2 ]
Liu, Mingyan [1 ]
Song, Dawn [2 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
[3] MIT, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: inputs modified by small-magnitude perturbations that mislead DNNs into producing adversary-selected outputs. Various attack strategies have been proposed to generate adversarial examples, but producing them efficiently and with high perceptual quality remains an open problem. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that learn and approximate the distribution of the original instances. Once the AdvGAN generator is trained, it can efficiently generate perturbations for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, unlike traditional white-box attacks, the original target model need not be accessed after the generator is trained. In black-box attacks, we dynamically train a distilled model of the black-box model and optimize the generator accordingly. Across different target models, adversarial examples generated by AdvGAN achieve high attack success rates under state-of-the-art defenses compared with other attacks. Our attack placed first on a public MNIST black-box attack challenge with 92.76% accuracy.
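The abstract describes a generator trained against three signals: an adversarial loss on the target (or distilled) model, a GAN loss from a discriminator, and a bound on the perturbation magnitude. Below is a minimal NumPy sketch of such a combined objective. The weights `alpha`, `beta`, the bound `c`, and the exact form of each term are illustrative assumptions for exposition, not the paper's precise formulation.

```python
import numpy as np

def advgan_style_loss(logits, target_class, disc_fake, perturbation,
                      alpha=1.0, beta=10.0, c=0.3):
    """Sketch of a combined generator objective (illustrative weights).

    logits        -- target/distilled model outputs on the perturbed input
    target_class  -- index of the adversary-selected class
    disc_fake     -- discriminator's realness score for the perturbed input
    perturbation  -- the generated perturbation vector
    """
    # Adversarial term: margin-style loss pushing the target-class logit
    # above every other logit (zero once the attack succeeds).
    other_best = np.max(np.delete(logits, target_class))
    l_adv = max(other_best - logits[target_class], 0.0)

    # GAN term: non-saturating generator loss, encouraging the
    # discriminator to score the perturbed input as realistic.
    l_gan = -np.log(disc_fake + 1e-12)

    # Hinge term: softly bound the L2 norm of the perturbation by c,
    # keeping perturbations small in magnitude.
    l_hinge = max(np.linalg.norm(perturbation) - c, 0.0)

    return l_adv + alpha * l_gan + beta * l_hinge
```

With a successful attack (target logit dominant, small perturbation, discriminator fooled), all three terms are near zero; a failed attack is penalized by the margin term. In training, this scalar would be minimized over the generator's parameters while the discriminator is trained adversarially.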
Pages: 3905-3911
Page count: 7
Related papers
50 items total
  • [31] AdvCGAN: An Elastic and Covert Adversarial Examples Generating Framework
    Wang, Baoli
    Fan, Xinxin
    Jing, Quanliang
    Tan, Haining
    Bi, Jingping
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [32] Marginal Attacks of Generating Adversarial Examples for Spam Filtering
    GU Zhaoquan
    XIE Yushun
    HU Weixiong
    YIN Lihua
    HAN Yi
    TIAN Zhihong
    Chinese Journal of Electronics, 2021, 30 (04) : 595 - 602
  • [33] Generating Audio Adversarial Examples with Ensemble Substituted Models
    Zhang, Yun
    Li, Hongwei
    Xu, Guowen
    Luo, Xizhao
    Dong, Guishan
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [34] Fast Gradient Scaled Method for Generating Adversarial Examples
    Xu, Zhefeng
    Luo, Zhijian
    Mu, Jinlong
    6TH INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE, ICIAI 2022, 2022: 189-193
  • [35] MESDeceiver: Efficiently Generating Natural Language Adversarial Examples
    Zhao, Tengfei
    Ge, Zhaocheng
    Hu, Hanping
    Shi, Dingmeng
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [36] Generating Distributional Adversarial Examples to Evade Statistical Detectors
    Kaya, Yigitcan
    Zafar, Muhammad Bilal
    Aydore, Sergul
    Rauschmayr, Nathalie
    Kenthapadi, Krishnaram
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [37] Generating adversarial examples without specifying a target model
    Yang, Gaoming
    Li, Mingwei
    Fang, Xianjing
    Zhang, Ji
    Liang, Xingzhu
    PEERJ COMPUTER SCIENCE, 2021, 7
  • [38] Generating Robust Audio Adversarial Examples with Temporal Dependency
    Zhang, Hongting
    Zhou, Pan
    Yan, Qiben
    Liu, Xiao-Yang
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3167 - 3173
  • [39] Generating adversarial examples with elastic-net regularized boundary equilibrium generative adversarial network
    Hu, Cong
    Wu, Xiao-Jun
    Li, Zuo-Yong
    PATTERN RECOGNITION LETTERS, 2020, 140: 281-287
  • [40] Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information
    Zhang J.
    Qian W.
    Cao J.
    Xu D.
    Neural Computing and Applications, 2024, 36 (23) : 14379 - 14394