Improving the Transferability of Adversarial Examples with Diverse Gradients

Cited by: 2
Authors:
Cao, Yangjie [1 ]
Wang, Haobo [1 ]
Zhu, Chenxi [1 ]
Zhuang, Yan [1 ]
Li, Jie [2 ]
Chen, Xianfu [3 ]
Affiliations:
[1] Zhengzhou Univ, Sch Cyber Sci & Engn, Zhengzhou, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[3] VTT Tech Res Ctr Finland, Oulu, Finland
Funding: National Natural Science Foundation of China
Keywords:
Adversarial examples; Gradient diversity; Black-box attack; Transferability;
DOI:
10.1109/IJCNN54540.2023.10191889
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Previous works have demonstrated the superior transferability of ensemble-based black-box attacks. However, existing methods require significant architectural differences among the source models to ensure gradient diversity. In this paper, we propose the Diverse Gradient Method (DGM) and verify that knowledge distillation can generate diverse gradients from a fixed model architecture to boost transferability. The core idea behind DGM is to obtain transferable adversarial perturbations by fusing, through an ensemble strategy, the diverse gradients provided by a single source model and its distilled versions. Experimental results show that DGM crafts adversarial examples with higher transferability while requiring only a very low training cost. Furthermore, the proposed method can be used as a flexible module to improve the transferability of most existing black-box attacks.
Pages: 9
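
The abstract above describes fusing the gradients of a single source model and its distilled versions through an ensemble strategy. Below is a minimal, hypothetical sketch of that idea in PyTorch, assuming an I-FGSM-style iterative attack that simply averages the cross-entropy loss over the source model and its distilled copies; the actual DGM fusion strategy and distillation procedure are not reproduced here, and the function name and hyperparameters are illustrative only.

# Hypothetical sketch (not the authors' code): fuse gradients from a source
# model and its distilled versions in an I-FGSM-style attack.
import torch
import torch.nn.functional as F

def diverse_gradient_attack(models, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples by fusing gradients from several models.

    models -- list of classifiers in eval mode: the source model plus its
              distilled versions (the assumed source of gradient diversity).
    x, y   -- clean images in [0, 1] and their ground-truth labels.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # One simple fusion choice: average the loss across all models, so the
        # resulting gradient is the mean of the individual model gradients.
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # untargeted ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into the L_inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # keep a valid image range
    return x_adv.detach()

In practice, `models` would hold the original source model together with a few students distilled from it, e.g. `models = [source_model] + distilled_models`; averaging is only one possible ensemble choice, and the paper's specific fusion scheme may differ.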