Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization

Cited by: 23
Authors
Zhang, Jianping [1 ]
Huang, Yizhan [1 ]
Wu, Weibin [2 ]
Lyu, Michael R. [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Sun Yat Sen Univ, Sch Software Engn, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52729.2023.01575
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Vision transformers (ViTs) have been successfully deployed in a variety of computer vision tasks, but they are still vulnerable to adversarial samples. Transfer-based attacks use a local model to generate adversarial samples and directly transfer them to attack a target black-box model. The high efficiency of transfer-based attacks makes them a severe security threat to ViT-based applications. Therefore, it is vital to design effective transfer-based attacks to identify the deficiencies of ViTs beforehand in security-sensitive scenarios. Existing efforts generally focus on regularizing the input gradients to stabilize the update direction of adversarial samples. However, the variance of the back-propagated gradients in intermediate blocks of ViTs may still be large, which may make the generated adversarial samples focus on some model-specific features and get stuck in poor local optima. To overcome the shortcomings of existing approaches, we propose the Token Gradient Regularization (TGR) method. According to the structural characteristics of ViTs, TGR reduces the variance of the back-propagated gradient in each internal block of ViTs in a token-wise manner and utilizes the regularized gradient to generate adversarial samples. Extensive experiments on attacking both ViTs and CNNs confirm the superiority of our approach. Notably, compared to the state-of-the-art transfer-based attacks, our TGR offers a performance improvement of 8.8% on average.
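The abstract describes regularizing the back-propagated gradient token-wise inside each ViT block before using it to craft adversarial samples. The sketch below illustrates that general idea only; the toy ViT-style surrogate (ToyViT), the 10% top-k damping rule, and the single-step sign update are assumptions made for illustration and are not the authors' TGR implementation.

import torch
import torch.nn as nn

def regularize_token_grad(grad, damp_ratio=0.1):
    # grad: (batch, tokens, dim) gradient flowing back through a block output.
    # Damp tokens whose gradients are extreme outliers, reducing the
    # token-wise variance of the back-propagated gradient (assumed rule).
    norms = grad.norm(dim=-1)                              # per-token magnitude
    k = max(1, int(damp_ratio * norms.size(1)))            # assumed: top 10%
    _, extreme = norms.topk(k, dim=1)
    mask = torch.ones_like(norms).scatter_(1, extreme, 0.0)
    return grad * mask.unsqueeze(-1)                       # zero extreme tokens

def attach_regularizer(block):
    # Hook the block's output tensor so its backward gradient is
    # regularized before it reaches earlier blocks.
    def forward_hook(module, inputs, output):
        if output.requires_grad:
            output.register_hook(regularize_token_grad)
    block.register_forward_hook(forward_hook)

class ToyViT(nn.Module):
    # Minimal ViT-like surrogate: patch embedding + transformer blocks + head.
    def __init__(self, dim=64, depth=4, num_classes=10):
        super().__init__()
        self.patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.patch(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        for blk in self.blocks:
            tokens = blk(tokens)
        return self.head(tokens.mean(dim=1))

model = ToyViT().eval()
for blk in model.blocks:
    attach_regularizer(blk)

# Single-step sign attack driven by the regularized input gradient.
x, y = torch.rand(1, 3, 224, 224), torch.tensor([3])
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
x_adv = (x_adv + 8 / 255 * x_adv.grad.sign()).clamp(0, 1).detach()

In practice, such hooks would be attached to the blocks of a pretrained ViT surrogate and combined with an iterative, momentum-based transfer attack rather than the single-step update shown here.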
Pages: 16415 - 16424
Page count: 10
Related Papers
50 records in total
  • [1] Towards Transferable Adversarial Attacks on Vision Transformers
    Wei, Zhipeng
    Chen, Jingjing
    Goldblum, Micah
    Wu, Zuxuan
    Goldstein, Tom
    Jiang, Yu-Gang
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2668 - 2676
  • [2] Towards transferable adversarial attacks on vision transformers for image classification
    Guo, Xu
    Chen, Peng
    Lu, Zhihui
    Chai, Hongfeng
    Du, Xin
    Wu, Xudong
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 152
  • [3] TIA: Token Importance Transferable Attack on Vision Transformers
    Fu, Tingchao
    Li, Fanxiao
    Zhang, Jinhong
    Zhu, Liang
    Wang, Yuanyu
    Zhou, Wei
    INFORMATION SECURITY AND CRYPTOLOGY, INSCRYPT 2023, PT II, 2024, 14527 : 91 - 107
  • [4] Towards Transferable Adversarial Attacks on Image and Video Transformers
    Wei, Zhipeng
    Chen, Jingjing
    Goldblum, Micah
    Wu, Zuxuan
    Goldstein, Tom
    Jiang, Yu-Gang
    Davis, Larry S.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 6346 - 6358
  • [5] Generating Transferable Adversarial Examples against Vision Transformers
    Wang, Yuxuan
    Wang, Jiakai
    Yin, Zinxin
    Gong, Ruihao
    Wang, Jingyi
    Liu, Aishan
    Liu, Xianglong
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5181 - 5190
  • [6] Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs
    Dingeto, Hiskias
    Kim, Juntae
    ELECTRONICS, 2024, 13 (13)
  • [7] Gradient-based Adversarial Attacks against Text Transformers
    Guo, Chuan
    Sablayrolles, Alexandre
    Jegou, Herve
    Kiela, Douwe
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 5747 - 5757
  • [8] Improving transferable adversarial attack for vision transformers via global attention and local drop
    Li, Tuo
    Han, Yahong
    MULTIMEDIA SYSTEMS, 2023, 29 (06) : 3467 - 3480
  • [9] Transferable Adversarial Attacks Against ASR
    Gao, Xiaoxue
    Li, Zexin
    Chen, Yiming
    Liu, Cong
    Li, Haizhou
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2200 - 2204