Improving transferability of adversarial examples with powerful affine-shear transformation attack

Cited by: 5
Authors
Wang, Xiaotong [1 ]
Huang, Chunguang [1 ]
Cheng, Hai [1 ]
Affiliations
[1] Heilongjiang Univ, Coll Elect Engn, Harbin 150000, Peoples R China
Keywords
Deep neural networks; Adversarial examples generation; Black-box attacks; Transferability; Network security
DOI
10.1016/j.csi.2022.103693
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Image classification models based on deep neural networks have achieved strong performance on a variety of tasks, yet they remain vulnerable to adversarial examples that can induce misclassification. Many methods generate adversarial examples in the white-box setting and achieve high success rates; however, most existing attacks transfer poorly when applied to unknown models under black-box settings. In this paper, we propose a method that generates adversarial examples by applying affine-shear transformations at the input layer of the deep model and maximizing the loss function at each iteration. This improves both the input diversity and the transferability of the adversarial examples, and we further optimize the generation process with the Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset show that the proposed method achieves higher transferability and higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image-transformation-based methods to build still stronger attacks.
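The abstract combines two ingredients that are standard in the transferability literature: a random input transformation (here, an affine shear) applied before each gradient step, and a Nesterov-accelerated iterative sign update. A minimal NumPy sketch of both pieces follows, using a toy differentiable loss in place of a real classifier; the function and parameter names (`random_affine_shear`, `ni_attack`, `max_shear`) are illustrative choices, not the paper's own implementation.

```python
import numpy as np

def random_affine_shear(img, max_shear=0.15, rng=None):
    """Apply a random shear (a special case of an affine transform) to an
    HxW image, using nearest-neighbour sampling and zero padding outside."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    sx, sy = rng.uniform(-max_shear, max_shear, size=2)
    A = np.array([[1.0, sx], [sy, 1.0]])        # shear matrix
    Ainv = np.linalg.inv(A)                     # map output pixels back to source
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys - h / 2, xs - w / 2], axis=-1) @ Ainv.T
    ys_src = np.rint(coords[..., 0] + h / 2).astype(int)
    xs_src = np.rint(coords[..., 1] + w / 2).astype(int)
    valid = (ys_src >= 0) & (ys_src < h) & (xs_src >= 0) & (xs_src < w)
    out = np.zeros_like(img)
    out[valid] = img[ys_src[valid], xs_src[valid]]
    return out

def ni_attack(x, y, grad_fn, eps=0.05, steps=10, mu=1.0):
    """Nesterov-accelerated iterative sign attack (NI-FGSM style).
    grad_fn(x, y) returns the gradient of the loss to be maximized w.r.t. x;
    the perturbation is kept inside the L_inf ball of radius eps around x."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        x_nes = x_adv + mu * alpha * g          # Nesterov look-ahead point
        grad = grad_fn(x_nes, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # accumulated momentum
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In the paper's pipeline the shear would be drawn fresh inside each iteration (so the gradient is taken through a differently transformed input every step, increasing input diversity); the sketch keeps the two pieces separate for clarity.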
Pages: 10