Improving the Transferability of Adversarial Examples with Diverse Gradients

Cited by: 2
Authors:
Cao, Yangjie [1 ]
Wang, Haobo [1 ]
Zhu, Chenxi [1 ]
Zhuang, Yan [1 ]
Li, Jie [2 ]
Chen, Xianfu [3 ]
Affiliations:
[1] Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[3] VTT Tech Res Ctr Finland, Oulu, Finland
Funding:
National Natural Science Foundation of China;
Keywords:
Adversarial examples; Gradient diversity; Black-box attack; Transferability;
DOI:
10.1109/IJCNN54540.2023.10191889
CLC Number:
TP18 [Artificial Intelligence Theory];
Discipline Codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
Previous works have demonstrated the superior transferability of ensemble-based black-box attacks. However, existing methods require significant architectural differences among the source models to ensure gradient diversity. In this paper, we propose the Diverse Gradient Method (DGM), verifying that knowledge distillation can generate diverse gradients from a fixed model architecture to boost transferability. The core idea behind DGM is to obtain transferable adversarial perturbations by fusing the diverse gradients provided by a single source model and its distilled versions through an ensemble strategy. Experimental results show that DGM crafts adversarial examples with higher transferability while requiring only an extremely low training cost. Furthermore, the proposed method can serve as a flexible module to improve the transferability of most existing black-box attacks.
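The gradient-fusion step the abstract describes can be reduced to a short sketch. Here a toy linear classifier stands in for the source model, and noisy copies of its weight matrix stand in for the distilled versions; both are illustrative assumptions, not the paper's actual setup. The DGM-style update averages the input gradients across the ensemble and then takes an FGSM-style signed step:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad(W, x, y):
    # Cross-entropy gradient w.r.t. the input x for a linear model
    # with logits = W @ x and true label y.
    p = softmax(W @ x)
    p[y] -= 1.0          # dL/dlogits for cross-entropy
    return W.T @ p       # chain rule back to the input

def dgm_step(x, y, models, eps=0.03):
    # Fuse (average) the gradients from the source model and its
    # stand-in "distilled" versions, then take a signed step --
    # the ensemble idea behind DGM, reduced to a toy setting.
    g = np.mean([loss_grad(W, x, y) for W in models], axis=0)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

rng = np.random.default_rng(0)
W_src = rng.standard_normal((3, 4))                     # source model
distilled = [W_src + 0.1 * rng.standard_normal(W_src.shape)
             for _ in range(2)]                         # hypothetical distilled copies
x = rng.random(4)
adv = dgm_step(x, 1, [W_src] + distilled)
```

In the paper the extra ensemble members come from knowledge distillation of the source network; the perturbed weight matrices above merely illustrate the fuse-then-sign structure of the update.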
Pages: 9
Related Papers (50 records):
  • [1] Improving Transferability of Adversarial Examples with Input Diversity
    Xie, Cihang
    Zhang, Zhishuai
    Zhou, Yuyin
    Bai, Song
    Wang, Jianyu
    Ren, Zhou
    Yuille, Alan
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2725 - 2734
  • [2] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [3] Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input
    Byun, Junyoung
    Cho, Seungju
    Kwon, Myung-Joon
    Kim, Hee-Seon
    Kim, Changick
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15223 - 15232
  • [4] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer
    Ge, Zhijin
    Shang, Fanhua
    Liu, Hongying
    Liu, Yuanyuan
    Wan, Liang
    Feng, Wei
    Wang, Xiaosen
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4440 - 4449
  • [5] Improving the transferability of adversarial examples through neighborhood attribution
    Ke, Wuping
    Zheng, Desheng
    Li, Xiaoyu
    He, Yuanhang
    Li, Tianyu
    Min, Fan
    KNOWLEDGE-BASED SYSTEMS, 2024, 296
  • [6] Improving the transferability of adversarial examples via direction tuning
    Yang, Xiangyuan
    Lin, Jie
    Zhang, Hanlin
    Yang, Xinyu
    Zhao, Peng
    INFORMATION SCIENCES, 2023, 647
  • [7] Improving the transferability of adversarial examples with separable positive and negative disturbances
    Yan, Yuanjie
    Bu, Yuxuan
    Shen, Furao
    Zhao, Jian
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (07): : 3725 - 3736
  • [8] FDT: Improving the transferability of adversarial examples with frequency domain transformation
    Ling, Jie
    Chen, Jinhui
    Li, Honglei
    COMPUTERS & SECURITY, 2024, 144
  • [9] Improving transferability of adversarial examples by saliency distribution and data augmentation
    Dong, Yansong
    Tang, Long
    Tian, Cong
    Yu, Bin
    Duan, Zhenhua
    COMPUTERS & SECURITY, 2022, 120