Generalizing universal adversarial perturbations for deep neural networks

Cited by: 5
Authors
Zhang, Yanghao [1 ]
Ruan, Wenjie [1 ]
Wang, Fu [1 ]
Huang, Xiaowei [2 ]
Affiliations
[1] Univ Exeter, Coll Engn Math & Phys Sci, Exeter EX4 4QF, England
[2] Univ Liverpool, Dept Comp Sci, Liverpool L69 3BX, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Deep learning; Adversarial examples; Security; Deep neural networks;
DOI
10.1007/s10994-023-06306-z
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Previous studies have shown that universal adversarial attacks can fool deep neural networks over a large set of input images with a single human-invisible perturbation. However, current methods for universal adversarial attacks are based on additive perturbation, which causes misclassification by directly adding the perturbation to the input images. In this paper, for the first time, we show that a universal adversarial attack can also be achieved through spatial transformation (non-additive). More importantly, to unify both additive and non-additive perturbations, we propose a novel unified yet flexible framework for universal adversarial attacks, called GUAP, which can initiate attacks with an l∞-norm (additive) perturbation, a spatially-transformed (non-additive) perturbation, or a combination of both. Extensive experiments are conducted on two computer vision scenarios, image classification and semantic segmentation, covering the CIFAR-10, ImageNet and Cityscapes datasets with a number of different deep neural network models, including GoogLeNet, VGG16/19, ResNet101/152, DenseNet121, and FCN-8s. Empirical results demonstrate that GUAP obtains higher attack success rates on these datasets than state-of-the-art universal adversarial attacks. In addition, we demonstrate how universal adversarial training benefits the robustness of the model against universal attacks. We release our tool GUAP at https://github.com/TrustAI/GUAP.
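The abstract describes combining a non-additive (spatially-transformed) perturbation with an additive l∞-bounded one. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function names, the nearest-neighbor warp, and the fixed universal flow field and noise are illustrative assumptions; GUAP itself learns these perturbations and uses differentiable warping.

```python
import numpy as np

def spatial_transform(x, flow):
    """Warp a grayscale image x (H, W) by a universal flow field (2, H, W).

    Uses nearest-neighbor resampling for simplicity; a learned attack
    would use differentiable bilinear sampling instead.
    """
    H, W = x.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return x[src_y, src_x]

def combined_universal_attack(x, delta, flow, eps=8 / 255):
    """Apply the same (flow, delta) pair to any input image x in [0, 1]:
    first the non-additive spatial warp, then the additive perturbation
    projected onto the l-infinity ball of radius eps."""
    warped = spatial_transform(x, flow)
    delta = np.clip(delta, -eps, eps)          # enforce the l-inf budget
    return np.clip(warped + delta, 0.0, 1.0)   # keep a valid image
```

Because `flow` and `delta` are fixed across inputs, the same call perturbs every image in a dataset, which is what makes the attack "universal"; setting `flow` to zeros recovers a purely additive attack, and setting `delta` to zeros a purely spatial one.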
Pages: 1597-1626
Page count: 30
Related papers (50 in total)
  • [21] Improving adversarial attacks on deep neural networks via constricted gradient-based perturbations
    Xiao, Yatie
    Pun, Chi-Man
    INFORMATION SCIENCES, 2021, 571 : 104 - 132
  • [22] Targeted Universal Adversarial Attack on Deep Hash Networks
    Meng, Fanlei
    Chen, Xiangru
    Cao, Yuan
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 165 - 174
  • [23] Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
    Hsiung, Lei
    Tsai, Yun-Yun
    Chen, Pin-Yu
    Ho, Tsung-Yi
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24658 - 24667
  • [24] Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [25] Defense against Universal Adversarial Perturbations
    Akhtar, Naveed
    Liu, Jian
    Mian, Ajmal
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3389 - 3398
  • [27] Universal adversarial perturbations generative network
    Wang, Zheng
    Yang, Yang
    Li, Jingjing
    Zhu, Xiaofeng
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2022, 25 (04): : 1725 - 1746
  • [29] Adversarial image detection in deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Becarelli, Rudy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (03) : 2815 - 2835
  • [30] Adversarial robustness improvement for deep neural networks
    Eleftheriadis, Charis
    Symeonidis, Andreas
    Katsaros, Panagiotis
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)