Improving the Transferability of Adversarial Examples with Diverse Gradients

Cited by: 2
Authors
Cao, Yangjie [1 ]
Wang, Haobo [1 ]
Zhu, Chenxi [1 ]
Zhuang, Yan [1 ]
Li, Jie [2 ]
Chen, Xianfu [3 ]
Affiliations
[1] Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[3] VTT Tech Res Ctr Finland, Oulu, Finland
Funding
National Natural Science Foundation of China;
关键词
Adversarial examples; Gradient diversity; Black-box attack; Transferability;
DOI
10.1109/IJCNN54540.2023.10191889
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Previous works have proven the superior performance of ensemble-based black-box attacks in terms of transferability. However, existing methods require significant architectural differences among the source models to ensure gradient diversity. In this paper, we propose a Diverse Gradient Method (DGM), verifying that knowledge distillation can generate diverse gradients from an unchanged model architecture to boost transferability. The core idea behind DGM is to obtain transferable adversarial perturbations by fusing the diverse gradients provided by a single source model and its distilled versions through an ensemble strategy. Experimental results show that DGM successfully crafts adversarial examples with higher transferability while requiring only an extremely low training cost. Furthermore, the proposed method can be used as a flexible module to improve the transferability of most existing black-box attacks.
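The abstract describes fusing the gradients of a single source model and its distilled copies through an ensemble strategy. Below is a minimal, illustrative PyTorch sketch of that idea using an I-FGSM-style loop with plain gradient averaging; the function name, the hyperparameters (epsilon, alpha, num_steps), and the averaging fusion are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def diverse_gradient_attack(source_model, distilled_models, x, y,
                            epsilon=8 / 255, alpha=2 / 255, num_steps=10):
    """Illustrative sketch: fuse gradients from a source model and its
    distilled copies (simple average), then take signed I-FGSM steps."""
    models = [source_model] + list(distilled_models)
    for m in models:
        m.eval()

    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        # Accumulate the cross-entropy gradient of every model in the ensemble.
        grad = torch.zeros_like(x_adv)
        for m in models:
            loss = F.cross_entropy(m(x_adv), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        grad = grad / len(models)

        with torch.no_grad():
            # Signed step, then project back into the L-infinity epsilon ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()

    return x_adv
```

Averaging per-model gradients is only one possible fusion; averaging logits or losses before back-propagating is an equally simple alternative, and the paper's actual ensemble strategy may differ.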
Pages: 9
Related Papers
50 records in total
  • [21] Improving transferability of adversarial examples via statistical attribution-based attacks
    Zhu, Hegui
    Jia, Yanmeng
    Yan, Yue
    Yang, Ze
    NEURAL NETWORKS, 2025, 187
  • [22] Improving the transferability of adversarial examples through black-box feature attacks
    Wang, Maoyuan
    Wang, Jinwei
    Ma, Bin
    Luo, Xiangyang
    NEUROCOMPUTING, 2024, 595
  • [23] Improving the Transferability of Adversarial Samples with Adversarial Transformations
    Wu, Weibin
    Su, Yuxin
    Lyu, Michael R.
    King, Irwin
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 9020 - 9029
  • [24] Enhancing the Transferability of Adversarial Examples with Feature Transformation
    Xu, Hao-Qi
    Hu, Cong
    Yin, He-Feng
    MATHEMATICS, 2022, 10 (16)
  • [25] Enhancing Transferability of Adversarial Examples with Spatial Momentum
    Wang, Guoqiu
    Yan, Huanqian
    Wei, Xingxing
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 593 - 604
  • [26] Enhancing the transferability of adversarial examples on vision transformers
    Guan, Yujiao
    Yang, Haoyu
    Qu, Xiaotong
    Wang, Xiaodong
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [27] Improving the transferability of adversarial examples via the high-level interpretable features for object detection
    Ding, Zhiyi
    Sun, Lei
    Mao, Xiuqing
    Dai, Leyu
    Ding, Ruiyang
    THE JOURNAL OF SUPERCOMPUTING, 81 (6)
  • [28] Gradient Aggregation Boosting Adversarial Examples Transferability Method
    Deng, Shiyun
    Ling, Jie
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (14): 275 - 282
  • [29] Improving the adversarial transferability with relational graphs ensemble adversarial attack
    Pi, Jiatian
    Luo, Chaoyang
    Xia, Fen
    Jiang, Ning
    Wu, Haiying
    Wu, Zhiyou
    FRONTIERS IN NEUROSCIENCE, 2023, 16
  • [30] Improving adversarial transferability through hybrid augmentation
    Zhu, Peican
    Fan, Zepeng
    Guo, Sensen
    Tang, Keke
    Li, Xingyu
    COMPUTERS & SECURITY, 2024, 139