Improving the transferability of adversarial attacks via self-ensemble

Cited by: 1
Authors
Cheng, Shuyan [1 ]
Li, Peng [1 ]
Liu, Jianguo [1 ]
Xu, He [1 ]
Yao, Yudong [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing 210023, Peoples R China
[2] Stevens Inst Technol, Dept Elect & Comp Engn, Hoboken, NJ 07030 USA
Funding
National Natural Science Foundation of China;
Keywords
Black-box attacks; Transferability; Adversarial examples; Self-ensemble; Feature importance;
DOI
10.1007/s10489-024-05728-z
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have been used extensively for diverse visual tasks, including object detection, face recognition, and image classification. However, they face several security threats, such as adversarial attacks. To improve the resistance of neural networks to adversarial attacks, researchers have investigated the security of models from the perspectives of both attacks and defenses. Recently, the transferability of adversarial attacks has received extensive attention, which promotes the application of adversarial attacks in practical scenarios. However, existing transferable attacks tend to become trapped in poor local optima, which significantly degrades transferability, because the production of adversarial samples lacks randomness. Therefore, we propose a self-ensemble-based feature-level adversarial attack (SEFA) that boosts transferability by randomly disrupting salient features. We provide theoretical analysis to demonstrate the superiority of the proposed method. In particular, perturbing intermediate features weighted by refined feature importance suppresses positive features and encourages negative features to realize adversarial attacks. Subsequently, self-ensemble is introduced to solve the optimization problem, thus enhancing diversity from an optimization perspective. The diverse orthogonal initial perturbations disrupt these features stochastically, searching the space of transferable perturbations exhaustively to avoid poor local optima and improve transferability effectively. Extensive experiments show the effectiveness and superiority of the proposed SEFA: the success rates against undefended models and defended models are improved by 7.7% and 13.4%, respectively, compared with existing transferable attacks. Our code is available at https://github.com/chengshuyan/SEFA.
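The two ingredients the abstract describes (mutually orthogonal random initial perturbations, and a feature-level step that suppresses importance-weighted intermediate features) can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: the helper names, the linear stand-in feature extractor `W`, and the sign-gradient update are all hypothetical; SEFA's actual feature-importance refinement and ensemble aggregation are defined in the paper.

```python
import numpy as np

def orthogonal_init_perturbations(dim, n, eps, seed=0):
    """Draw n mutually orthogonal initial perturbations of norm eps.

    QR decomposition of a random Gaussian matrix yields orthonormal
    columns; scaling by eps puts each start point on the perturbation
    budget's boundary, so the ensemble explores diverse directions.
    """
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, n)))
    return eps * q.T  # shape (n, dim): one perturbation per row

def feature_level_step(x, delta, W, importance, alpha, eps):
    """One feature-level attack step on a toy linear extractor.

    Loss = sum(importance * features); descending its input gradient
    suppresses features that support the true class (positive
    importance) and boosts those that oppose it (negative importance).
    """
    grad = W.T @ importance                 # d(loss)/d(input)
    delta = delta - alpha * np.sign(grad)   # FGSM-style descent step
    return np.clip(delta, -eps, eps)        # stay within the L-inf budget
```

A self-ensemble run would iterate `feature_level_step` from each row of `orthogonal_init_perturbations(...)` and keep (or aggregate) the perturbations that best destroy the weighted features, which is how diverse starts help escape poor local optima.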
Pages: 10608-10626
Number of pages: 19
Related papers
50 records
  • [41] Enhancing the Transferability of Adversarial Attacks through Variance Tuning
    Wang, Xiaosen
    He, Kun
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1924 - 1933
  • [42] Studying the Transferability of Non-Targeted Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [43] Enhancing the transferability of adversarial attacks with diversified input strategies
    Li Z.
    Chen Y.
    Yang B.
    Li C.
    Zhang S.
    Li W.
    Zhang H.
    Journal of Intelligent and Fuzzy Systems, 2024, 46 (04): : 10359 - 10373
  • [44] On the Transferability of Adversarial Attacks against Neural Text Classifier
    Yuan, Liping
    Zheng, Xiaoqing
    Zhou, Yi
    Hsieh, Cho-Jui
    Chang, Kai-Wei
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1612 - 1625
  • [45] Harmonizing Transferability and Imperceptibility: A Novel Ensemble Adversarial Attack
    Zhang, Rui
    Xia, Hui
    Kang, Zi
    Li, Zhengheng
    Du, Yu
    Gao, Mingyang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (15): : 25625 - 25636
  • [46] Improving Transferability of Adversarial Examples with Input Diversity
    Xie, Cihang
    Zhang, Zhishuai
    Zhou, Yuyin
    Bai, Song
    Wang, Jianyu
    Ren, Zhou
    Yuille, Alan
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2725 - 2734
  • [47] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [48] Improving adversarial transferability through hybrid augmentation
    Zhu, Peican
    Fan, Zepeng
    Guo, Sensen
    Tang, Keke
    Li, Xingyu
    COMPUTERS & SECURITY, 2024, 139
  • [49] Improving the transferability of adversarial samples with channel switching
    Ling, Jie
    Chen, Xiaohuan
    Luo, Yu
    APPLIED INTELLIGENCE, 2023, 53 (24) : 30580 - 30592