Towards transferable adversarial attacks on vision transformers for image classification

Times cited: 1
Authors
Guo, Xu [1 ]
Chen, Peng [1 ]
Lu, Zhihui [1 ,2 ]
Chai, Hongfeng [1 ,3 ]
Du, Xin [1 ]
Wu, Xudong [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[2] Shanghai Blockchain Engn Res Ctr, Shanghai 200433, Peoples R China
[3] Fudan Univ, Inst Financial Technol, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial example; Transfer attack; Surrogate model; Vision transformer; Fintech regulation; Image classification;
DOI
10.1016/j.sysarc.2024.103155
CLC (Chinese Library Classification)
TP3 [Computing technology, computer technology];
Subject classification code
0812;
Abstract
The deployment of high-performance Vision Transformer (ViT) models has garnered attention from both industry and academia. However, their vulnerability to adversarial examples poses security risks in scenarios such as intelligent surveillance, autonomous driving, and fintech regulation. As a black-box attack technique, transfer attacks use a surrogate model to generate transferable adversarial examples that are then used to attack a target victim model; existing methods mainly follow a forward (input diversification) or a backward (gradient modification) approach. However, both approaches are currently implemented in a straightforward manner, which limits the transferability of the adversarial examples generated from surrogate models. In this paper, we propose a Forward-Backward Transferable Adversarial Attack framework (FBTA) that generates highly transferable adversarial examples against different models by fully exploiting ViT's distinctive intermediate-layer structure. In the forward inference process of FBTA, we propose a Dropout-based Transferable Attack (DTA) approach that diversifies the intermediate states of ViT models, simulating an ensemble-learning effect; in the backward process, a Backpropagation Gradient Clipping (BGC) method refines the gradients within the intermediate layers of ViT models. Extensive experiments on state-of-the-art ViTs and robust CNNs demonstrate that our FBTA framework achieves an average performance improvement of 2.79% over state-of-the-art transfer-based attacks, offering insights into understanding and defending against transfer attacks.
Pages: 11
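
The abstract names the two mechanisms but not their details, so the following is a minimal, hypothetical sketch of the general idea in PyTorch: dropout injected after each encoder block's MLP to diversify forward intermediate states (in the spirit of DTA), and a backward hook that clips gradients flowing through each block (in the spirit of BGC), wrapped in a standard iterative FGSM transfer attack. The surrogate choice (torchvision's vit_b_16), dropout rate, clipping threshold, and attack hyper-parameters are all assumptions for illustration, not the paper's implementation.

# Hypothetical illustration of forward diversification + backward gradient
# clipping on a ViT surrogate; NOT the FBTA/DTA/BGC implementation from the
# paper. All hyper-parameters below are assumed for demonstration.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16, ViT_B_16_Weights

surrogate = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()

def diversify(module, inputs, output):
    # Forward: randomly drop intermediate activations so each pass samples a
    # different sub-network, emulating an ensemble of surrogates (DTA-like).
    return F.dropout(output, p=0.1, training=True)  # active despite eval()

def clip_backward(module, grad_input, grad_output):
    # Backward: clip gradients flowing back through the block (BGC-like).
    return tuple(g.clamp(-1e-3, 1e-3) if g is not None else g
                 for g in grad_input)

for block in surrogate.encoder.layers:
    block.mlp.register_forward_hook(diversify)
    block.register_full_backward_hook(clip_backward)

def transfer_attack(x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Iterative FGSM on the modified surrogate; x is a batch in [0, 1].
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

# Usage: craft examples on the surrogate, then test them on a victim model.
x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
y = torch.tensor([0])            # stand-in ground-truth label
x_adv = transfer_attack(x, y)

In a real transfer-attack evaluation, x_adv would be fed to a separately trained victim model; the attack counts as successful if the victim's prediction changes while the perturbation stays within the eps-ball around x.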