Improving transferability of adversarial examples with powerful affine-shear transformation attack

Cited by: 5
Authors
Wang, Xiaotong [1 ]
Huang, Chunguang [1 ]
Cheng, Hai [1 ]
Affiliations
[1] Heilongjiang Univ, Coll Elect Engn, Harbin 150000, Peoples R China
Keywords
Deep neural networks; Adversarial examples generation; Black-box attacks; Transferability; Network security;
DOI
10.1016/j.csi.2022.103693
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Image classification models based on deep neural networks have achieved great improvements on various tasks, but they remain vulnerable to adversarial examples that increase the probability of misclassification. Many methods have been proposed to generate adversarial examples under white-box conditions and achieve high success rates; however, most existing attacks transfer poorly when applied to unknown models in black-box settings. In this paper, we propose a new method that generates adversarial examples by applying an affine-shear transformation at the input layer of deep models and maximizing the loss function at each iteration. This improves both the transferability and the input diversity of the adversarial examples, and we further optimize the generation process with Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset show that our method achieves higher transferability and higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image-transformation-based methods to build even more powerful attacks.
Pages: 10
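The pipeline described in the abstract (a random affine-shear transformation of the input, iterative loss maximization, and Nesterov accelerated gradient with momentum) can be sketched roughly as follows. This is a hypothetical, simplified illustration and not the paper's implementation: the classifier is a toy logistic-regression model, the shear is a nearest-neighbour horizontal pixel shift, and the step size `alpha`, shear range, momentum `mu`, and transform probability `p_div` are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "classifier": logistic regression on flattened 8x8 images.
H = W = 8
N_CLASSES = 3
WEIGHTS = rng.standard_normal((N_CLASSES, H * W)) * 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x, label):
    """Cross-entropy loss and its gradient w.r.t. the input image x."""
    p = softmax(WEIGHTS @ x.ravel())
    loss = -np.log(p[label] + 1e-12)
    dlogits = p.copy()
    dlogits[label] -= 1.0                      # dL/dlogits = p - onehot(label)
    return loss, (WEIGHTS.T @ dlogits).reshape(H, W)

def shear(img, s):
    """Horizontal nearest-neighbour shear: row y is shifted by round(s * y)."""
    out = np.zeros_like(img)
    for y in range(H):
        shift = int(round(s * y))
        for x in range(W):
            xs = x - shift
            if 0 <= xs < W:
                out[y, x] = img[y, xs]
    return out

def attack(x, label, eps=0.3, steps=10, alpha=0.06, mu=1.0, p_div=0.5):
    """Sketch of an affine-shear + Nesterov iterative attack (assumed values).

    Each step: take the Nesterov look-ahead point, with probability p_div
    shear it by a random factor before computing the gradient, shear the
    gradient back (the shear is a pixel permutation, so its approximate
    inverse is the opposite shear), accumulate momentum, and ascend the
    loss with a sign step inside an L-infinity ball of radius eps.
    """
    x_adv = x.copy()
    g = np.zeros_like(x)                       # accumulated momentum
    for _ in range(steps):
        x_nes = x_adv + alpha * mu * g         # Nesterov look-ahead
        if rng.random() < p_div:               # diverse (sheared) input
            s = rng.uniform(-0.2, 0.2)
            _, grad_t = loss_and_grad(shear(x_nes, s), label)
            grad = shear(grad_t, -s)           # map gradient back
        else:                                  # plain input
            _, grad = loss_and_grad(x_nes, label)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # valid pixel range
    return x_adv
```

Against a real deep model, the same loop would query the network's input gradient instead of the toy linear layer, and the affine-shear transform would typically be applied as a differentiable warp so the gradient flows through it directly.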