Generalizing universal adversarial perturbations for deep neural networks

Cited by: 5
Authors
Zhang, Yanghao [1 ]
Ruan, Wenjie [1 ]
Wang, Fu [1 ]
Huang, Xiaowei [2 ]
Affiliations
[1] Univ Exeter, Coll Engn Math & Phys Sci, Exeter EX4 4QF, England
[2] Univ Liverpool, Dept Comp Sci, Liverpool L69 3BX, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Deep learning; Adversarial examples; Security; Deep neural networks;
DOI
10.1007/s10994-023-06306-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Previous studies have shown that universal adversarial attacks can fool deep neural networks over a large set of input images with a single, human-imperceptible perturbation. However, existing methods for universal adversarial attacks are based on additive perturbations, which cause misclassification by directly adding the perturbation to the input images. In this paper, we show for the first time that a universal adversarial attack can also be achieved through a non-additive spatial transformation. More importantly, to unify additive and non-additive perturbations, we propose a novel unified yet flexible framework for universal adversarial attacks, called GUAP, which can initiate attacks with an ℓ∞-norm (additive) perturbation, a spatially-transformed (non-additive) perturbation, or a combination of both. Extensive experiments are conducted on two computer vision tasks, image classification and semantic segmentation, covering the CIFAR-10, ImageNet, and Cityscapes datasets and a range of deep neural network models, including GoogLeNet, VGG16/19, ResNet101/152, DenseNet121, and FCN-8s. The results demonstrate that GUAP obtains higher attack success rates on these datasets than state-of-the-art universal adversarial attacks. In addition, we demonstrate how universal adversarial training improves model robustness against universal attacks. We release our tool GUAP at https://github.com/TrustAI/GUAP.
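The abstract describes GUAP as unifying an additive, ℓ∞-bounded universal perturbation with a non-additive, spatially-transformed one. The sketch below illustrates, in PyTorch, how a single shared flow field and a single shared noise tensor could be applied to a whole batch of images. It is a minimal illustration under our own assumptions, not the authors' released implementation (see https://github.com/TrustAI/GUAP for the actual tool); the function name, parameter shapes, and the eps budget are hypothetical.

```python
import torch
import torch.nn.functional as F

def apply_universal_perturbation(images, flow, delta, eps=8 / 255):
    """Apply one universal perturbation (spatial flow + additive noise) to a batch.

    images: (B, C, H, W) tensor with values in [0, 1]
    flow:   (1, H, W, 2) universal flow field, shared by all images
    delta:  (1, C, H, W) universal additive noise, shared by all images
    """
    b, _, h, w = images.shape
    # Identity sampling grid in [-1, 1] x [-1, 1], the coordinate
    # convention expected by F.grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=images.device),
        torch.linspace(-1, 1, w, device=images.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2)
    # Non-additive part: warp every image with the same universal flow field.
    warped = F.grid_sample(
        images, (base_grid + flow).expand(b, -1, -1, -1), align_corners=True
    )
    # Additive part: clip the universal noise to the L-infinity ball,
    # add it, and keep the result a valid image.
    return (warped + delta.clamp(-eps, eps)).clamp(0.0, 1.0)
```

In an attack of this form, flow and delta would be optimized once over a training set so that the same pair fools the model on as many inputs as possible; zeroing flow recovers a purely additive attack, and zeroing delta a purely spatial one.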
Pages: 1597-1626
Page count: 30
Related Papers
50 items in total
  • [32] Disrupting adversarial transferability in deep neural networks
    Wiedeman, Christopher
    Wang, Ge
    PATTERNS, 2022, 3 (05):
  • [33] Adversarial image detection in deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Becarelli, Rudy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 : 2815 - 2835
  • [34] Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards a Fourier Perspective
    Zhang, Chaoning
    Benz, Philipp
    Karjauv, Adil
    Kweon, In So
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3296 - 3304
  • [35] A Neural Rejection System Against Universal Adversarial Perturbations in Radio Signal Classification
    Zhang, Lu
    Lambotharan, Sangarapillai
    Zheng, Gan
    Roli, Fabio
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [36] Adversarial Perturbation Defense on Deep Neural Networks
    Zhang, Xingwei
    Zheng, Xiaolong
    Mao, Wenji
    ACM COMPUTING SURVEYS, 2021, 54 (08)
  • [37] ADVERSARIAL WATERMARKING TO ATTACK DEEP NEURAL NETWORKS
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019 : 1962 - 1966
  • [38] ROBUSTNESS OF DEEP NEURAL NETWORKS IN ADVERSARIAL EXAMPLES
    Teng, Da
Song, Xiao
    Gong, Guanghong
    Han, Liang
INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02) : 123 - 133
  • [39] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03):
  • [40] Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification
    Koga, Kazuki
    Takemoto, Kazuhiro
    ALGORITHMS, 2022, 15 (05)