Improving transferability of adversarial examples with powerful affine-shear transformation attack
Cited by: 5
Authors:
Wang, Xiaotong [1]; Huang, Chunguang [1]; Cheng, Hai [1]
Affiliations:
[1] Heilongjiang Univ, Coll Elect Engn, Harbin 150000, Peoples R China
Keywords:
Deep neural networks;
Adversarial examples generation;
Black-box attacks;
Transferability;
Network security;
DOI:
10.1016/j.csi.2022.103693
CLC Number:
TP3 [Computing Technology, Computer Technology]
Discipline Code:
0812
Abstract:
Image classification models based on deep neural networks have achieved great improvements on various tasks, but they remain vulnerable to adversarial examples that can induce misclassification. Various methods have been proposed to generate adversarial examples under white-box attack settings and have achieved high success rates. However, most existing adversarial attacks show only poor transferability when attacking other, unknown models under black-box settings. In this paper, we propose a new method that generates adversarial examples based on an affine-shear transformation applied at the input layer of the deep model and maximizes the loss function at each iteration. This method improves both the transferability and the input diversity of adversarial examples, and we further optimize the generation process with Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset show that our proposed method exhibits higher transferability and achieves higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image transformation-based methods to build even more powerful attacks.
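The abstract describes an iterative attack that applies an affine-shear transformation to the input before each forward pass and maximizes the classification loss with Nesterov accelerated gradient. The paper's exact algorithm and hyperparameters are not given in this record, so the following PyTorch code is only a minimal sketch of that general idea; the function names, shear range, transformation probability, epsilon budget, step count, and decay factor are all illustrative assumptions rather than the authors' reference implementation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def random_affine_shear(x, max_shear=15.0, p=0.7):
    """With probability p, apply a random shear (in degrees) to a batch of images."""
    if torch.rand(1).item() > p:
        return x
    shear_x = float(torch.empty(1).uniform_(-max_shear, max_shear))
    shear_y = float(torch.empty(1).uniform_(-max_shear, max_shear))
    return TF.affine(x, angle=0.0, translate=[0, 0], scale=1.0,
                     shear=[shear_x, shear_y])


def affine_shear_nesterov_attack(model, x, y, eps=16 / 255, steps=10, decay=1.0):
    """Sketch of an input-diversity attack: maximize cross-entropy loss under an
    L-infinity budget, using a random affine-shear transform at each step and a
    Nesterov look-ahead on the accumulated gradient (hyperparameters assumed)."""
    alpha = eps / steps                       # per-step budget
    x_adv = x.clone().detach()
    grad_accum = torch.zeros_like(x)          # accumulated (momentum) gradient
    for _ in range(steps):
        # Nesterov look-ahead point before computing the gradient.
        x_nes = (x_adv + alpha * decay * grad_accum).detach().requires_grad_(True)
        # Input diversity: random affine-shear transform before the forward pass.
        logits = model(random_affine_shear(x_nes))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_nes)[0]
        # L1-normalize the gradient and accumulate it with decay.
        grad_accum = decay * grad_accum + grad / grad.abs().mean(
            dim=(1, 2, 3), keepdim=True)
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad_accum.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()
```

Because the perturbation is crafted on randomly sheared views of the input, the resulting adversarial examples tend to be less tied to one surrogate model, which is the transferability effect the abstract claims; the sketch can be pointed at a single model or an averaged ensemble of models without changing the loop.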
Pages: 10