Improving transferability of adversarial examples with powerful affine-shear transformation attack

Cited by: 5
Authors
Wang, Xiaotong [1 ]
Huang, Chunguang [1 ]
Cheng, Hai [1 ]
Affiliations
[1] Heilongjiang Univ, Coll Elect Engn, Harbin 150000, Peoples R China
Keywords
Deep neural networks; Adversarial examples generation; Black-box attacks; Transferability; Network security
DOI
10.1016/j.csi.2022.103693
Chinese Library Classification: TP3 [Computing Technology, Computer Technology]
Discipline Code: 0812
Abstract
Image classification models based on deep neural networks have achieved great improvements on various tasks, but they remain vulnerable to adversarial examples that increase the likelihood of misclassification. Various methods have been proposed to generate adversarial examples under white-box settings and have achieved high success rates. However, most existing adversarial attacks transfer poorly when attacking other, unknown models under black-box settings. In this paper, we propose a new method that generates adversarial examples based on an affine-shear transformation applied at the deep model's input layer, maximizing the loss function during each iteration. This method improves both the transferability and the input diversity of adversarial examples, and we further optimize the generation process with Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset indicate that our proposed method exhibits higher transferability and achieves higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image transformation-based methods to build even more powerful attacks.
Pages: 10
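The abstract describes two combined ideas: transforming the input with a random affine-shear warp at each iteration (to diversify inputs and improve transferability) and taking Nesterov-accelerated momentum steps on the sign of the gradient. The sketch below is not the authors' implementation; it is a minimal NumPy illustration under stated assumptions: a hypothetical surrogate-gradient callable `grad_fn`, images as H×W×C arrays in [0, 1], and nearest-neighbour resampling for the shear.

```python
import numpy as np

def random_affine_shear(img, max_shear=0.2, rng=None):
    """Warp an HxWxC image with a random shear (a simple affine transform).
    Hypothetical stand-in for the paper's affine-shear input transformation."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    sx, sy = rng.uniform(-max_shear, max_shear, size=2)
    # Inverse of the shear matrix [[1, sx], [sy, 1]], for backward mapping.
    inv = np.array([[1.0, -sx], [-sy, 1.0]]) / (1.0 - sx * sy)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys - h / 2.0, xs - w / 2.0])   # centre the pixel grid
    src = np.tensordot(inv, coords, axes=1)           # source coords, 2 x H x W
    yi = np.clip(np.rint(src[0] + h / 2.0), 0, h - 1).astype(int)
    xi = np.clip(np.rint(src[1] + w / 2.0), 0, w - 1).astype(int)
    return img[yi, xi]                                # nearest-neighbour sample

def nesterov_attack(x, grad_fn, eps=0.06, steps=10, mu=1.0):
    """NI-FGSM-style loop: look ahead along the accumulated momentum,
    shear the look-ahead input, then take a sign-gradient step clipped
    to the eps-ball around the clean image x."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        x_nes = x_adv + alpha * mu * g                    # Nesterov look-ahead
        # Gradient of the transformed input is used directly, as in
        # input-diversity attacks (the transform Jacobian is ignored).
        grad = grad_fn(random_affine_shear(x_nes, rng=rng))
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12) # momentum update
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep a valid image
    return x_adv
```

In practice `grad_fn` would backpropagate a classification loss through a surrogate network; the toy gradient used below only demonstrates that the loop respects the perturbation budget.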