Enhancing Transferability of Adversarial Examples with Spatial Momentum

Cited by: 8
Authors
Wang, Guoqiu [1 ]
Yan, Huanqian [1 ]
Wei, Xingxing [2 ]
Affiliations
[1] Beihang Univ, Beijing Key Lab Digital Media DML, Sch Comp Sci & Engn, Beijing, Peoples R China
[2] Beihang Univ, Inst Artificial Intelligence, Hangzhou Innovat Inst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
Keywords
Adversarial attack; Adversarial transferability; Momentum-based attack
DOI
10.1007/978-3-031-18907-4_46
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Many adversarial attack methods achieve satisfactory attack success rates in the white-box setting, but they usually show poor transferability when attacking other DNN models. Momentum-based attacks are one effective way to improve transferability: they integrate a momentum term into the iterative process, stabilizing the update directions by exploiting the temporal correlation of gradients at each pixel. We argue that this temporal momentum alone is not enough; gradients from the spatial domain within an image, i.e., gradients of the context pixels centered on the target pixel, are also important for stabilization. To this end, we propose a novel method named Spatial Momentum Iterative FGSM (SMI-FGSM), which extends the mechanism of momentum accumulation from the temporal domain to the spatial domain by considering context information from different regions within the image. SMI-FGSM is then integrated with temporal momentum to stabilize the gradient update direction in both the temporal and spatial domains simultaneously. Extensive experiments show that our method further enhances adversarial transferability. It achieves the best transfer success rates against multiple mainstream undefended and defended models, outperforming state-of-the-art attack methods by a large margin of 10% on average.
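The abstract combines two ideas: MI-FGSM-style temporal momentum accumulated across iterations, and a spatial momentum that aggregates gradients from context regions within the image. As a rough illustration of how the two could be combined, the PyTorch sketch below approximates spatial momentum by averaging gradients over several randomly masked copies of the input; the function name smi_fgsm_attack, the mask-based sampling of context regions, the hyperparameter defaults, and the NCHW/[0,1] input assumptions are illustrative choices, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def smi_fgsm_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0, n_regions=8, keep_prob=0.9):
    """Sketch of a spatial-momentum iterative FGSM-style attack.

    Per step, gradients from several randomly masked copies of the input
    stand in for "context regions" (spatial momentum); their average is
    folded into the usual MI-FGSM temporal momentum before the signed update.
    Assumes x is an NCHW batch in [0, 1] and model returns class logits.
    """
    alpha = eps / steps                       # per-step budget
    x_adv = x.clone().detach()
    g_temporal = torch.zeros_like(x)          # temporal momentum accumulator

    for _ in range(steps):
        x_adv.requires_grad_(True)

        # Spatial momentum: average gradients over randomly masked inputs.
        # Random binary masks are an assumption, not the paper's exact sampling.
        g_spatial = torch.zeros_like(x)
        for _ in range(n_regions):
            mask = (torch.rand_like(x_adv) < keep_prob).float()
            loss = F.cross_entropy(model(x_adv * mask), y)
            g_spatial += torch.autograd.grad(loss, x_adv)[0]
        g_spatial /= n_regions

        # Temporal momentum (MI-FGSM style): normalized accumulation over steps.
        g_temporal = mu * g_temporal + g_spatial / (
            g_spatial.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12
        )

        # Signed ascent step, projected back into the L-infinity ball around x.
        x_adv = x_adv.detach() + alpha * g_temporal.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()

Under these assumptions, a call such as smi_fgsm_attack(model, images, labels) returns perturbed images constrained to an L-infinity ball of radius eps around the originals.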
Pages: 593-604
Page count: 12
Related Papers (50 in total)
  • [1] Enhancing the Transferability of Adversarial Examples Based on Nesterov Momentum for Recommendation Systems. Qian, Fulan; Yuan, Bei; Chen, Hai; Chen, Jie; Lian, Defu; Zhao, Shu. IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (05): 1276-1287.
  • [2] Enhancing the Transferability of Adversarial Examples with Feature Transformation. Xu, Hao-Qi; Hu, Cong; Yin, He-Feng. MATHEMATICS, 2022, 10 (16).
  • [3] Enhancing the transferability of adversarial examples on vision transformers. Guan, Yujiao; Yang, Haoyu; Qu, Xiaotong; Wang, Xiaodong. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02).
  • [4] Enhancing Transferability of Adversarial Examples by Successively Attacking Multiple Models. Zhang, Xiaolin; Zhang, Wenwen; Liu, Lixin; Wang, Yongping; Gao, Lu; Zhang, Shuai. International Journal of Network Security, 2023, 25 (02): 306-316.
  • [5] Ranking the Transferability of Adversarial Examples. Levy, Moshe; Amit, Guy; Elovici, Yuval; Mirsky, Yisroel. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (05).
  • [6] Enhancing transferability of adversarial examples via rotation-invariant attacks. Duan, Yexin; Zou, Junhua; Zhou, Xingyu; Zhang, Wu; Zhang, Jin; Pan, Zhisong. IET COMPUTER VISION, 2022, 16 (01): 1-11.
  • [7] Enhancing Transferability of Adversarial Examples Through Mixed-Frequency Inputs. Qian, Yaguan; Chen, Kecheng; Wang, Bin; Gu, Zhaoquan; Ji, Shouling; Wang, Wei; Zhang, Yanchun. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 7633-7645.
  • [8] Enhancing transferability of adversarial examples with pixel-level scale variation. Mao, Zhongshu; Lu, Yiqin; Cheng, Zhe; Shen, Xiong. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2023, 118.
  • [9] An approach to improve transferability of adversarial examples. Zhang, Weihan; Guo, Ying. PHYSICAL COMMUNICATION, 2024, 64.
  • [10] Remix: Towards the transferability of adversarial examples. Zhao, Hongzhi; Hao, Lingguang; Hao, Kuangrong; Wei, Bing; Cai, Xin. NEURAL NETWORKS, 2023, 163: 367-378.