Frequency-based methods for improving the imperceptibility and transferability of adversarial examples

Cited by: 2
Authors
Zhu, Hegui [1 ]
Ren, Yuchen [1 ]
Liu, Chong [1 ]
Sui, Xiaoyan [1 ]
Zhang, Libo [2 ]
Affiliations
[1] Northeastern Univ, Coll Sci, Shenyang 110819, Peoples R China
[2] Gen Hosp Northern Theater Command PLA, Dept Radiol, Shenyang 110016, Peoples R China
Keywords
Adversarial attack; Frequency information; Normal projection; Frequency spectrum diversity transformation; Frequency dropout;
DOI
10.1016/j.asoc.2023.111088
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks are a popular technique for evaluating the robustness of deep learning models. However, adversarial examples crafted by current methods often have poor imperceptibility and low transferability, which limits the practical utility of such attacks. In this paper, we leverage frequency information to improve imperceptibility and adversarial transferability in the white-box and black-box scenarios, respectively. Specifically, in the white-box scenario, we adopt a low-frequency constraint and normal projection to improve the imperceptibility of adversarial examples without reducing attack performance. In the black-box scenario, we propose an effective Frequency Spectrum Diversity Transformation (FSDT) to address overfitting to the substitute model. FSDT enriches the input with diverse, unfamiliar information, significantly improving the transferability of adversarial attacks. For defended target models in the black-box scenario, we also design a gradient-refinement technique named Frequency Dropout (FD) that discards useless components of the gradient in the frequency domain, further mitigating the protective effect of defense mechanisms. Extensive experiments validate the superiority of the proposed methods. Furthermore, we apply them to evaluate the robustness of real-world online models and expose their vulnerabilities. Finally, we analyze, from a frequency perspective, why imperceptibility and adversarial transferability are difficult to improve concurrently. Our code is available at https://github.com/RYC-98/FSD-MIM-and-NPGA.
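The Frequency Dropout idea described in the abstract — discarding some components of the gradient in the frequency domain before the update step — can be sketched as follows. This is a minimal illustration only, assuming a 2-D FFT and an i.i.d. Bernoulli keep-mask; the paper's actual transform, masking rule, and hyperparameters may differ (see the linked repository for the authors' implementation).

```python
import numpy as np

def frequency_dropout(grad, keep_prob=0.9, rng=None):
    """Hypothetical sketch of Frequency Dropout (FD): zero out a random
    subset of the gradient's frequency components, then transform back
    to the spatial domain to obtain a refined gradient."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.fft2(grad)                   # gradient -> frequency domain
    mask = rng.random(spectrum.shape) < keep_prob  # Bernoulli keep-mask
    return np.fft.ifft2(spectrum * mask).real     # masked spectrum -> spatial domain
```

With `keep_prob=1.0` the mask keeps every component and the gradient passes through unchanged; lowering `keep_prob` drops more frequency components, which is the knob that would trade gradient fidelity against the defense-evasion effect the paper describes.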
Pages: 16
Related Papers
50 records in total
  • [1] Improving Adversarial Transferability via Frequency-based Stationary Point Search
    Zhu, Zhiyu
    Chen, Huaming
    Zhang, Jiayu
    Wang, Xinyi
    Jin, Zhibo
    Lu, Qinghua
    Shen, Jun
    Choo, Kim-Kwang Raymond
    [J]. PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3626 - 3635
  • [2] FDT: Improving the transferability of adversarial examples with frequency domain transformation
    Ling, Jie
    Chen, Jinhui
    Li, Honglei
    [J]. COMPUTERS & SECURITY, 2024, 144
  • [3] Rethinking Adversarial Examples Exploiting Frequency-Based Analysis
    Han, Sicong
    Lin, Chenhao
    Shen, Chao
    Wang, Qian
    [J]. INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2021), PT II, 2021, 12919 : 73 - 89
  • [4] Improving the Imperceptibility of Adversarial Examples Based on Weakly Perceptual Perturbation in Key Regions
    Wang, Yekui
    Cao, Tieyong
    Zheng, Yunfei
    Fang, Zheng
    Wang, Yang
    Liu, Yajiu
    Chen, Lei
    Fu, Bingyang
    [J]. SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [5] Improving Transferability of Adversarial Examples with Input Diversity
    Xie, Cihang
    Zhang, Zhishuai
    Zhou, Yuyin
    Bai, Song
    Wang, Jianyu
    Ren, Zhou
    Yuille, Alan
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 2725 - 2734
  • [6] Improving the transferability of adversarial examples with path tuning
    Li, Tianyu
    Li, Xiaoyu
    Ke, Wuping
    Tian, Xuwei
    Zheng, Desheng
    Lu, Chao
    [J]. APPLIED INTELLIGENCE, 2024, 54 (23) : 12194 - 12214
  • [7] Improving the Transferability of Adversarial Examples with Diverse Gradients
    Cao, Yangjie
    Wang, Haobo
    Zhu, Chenxi
    Zhuang, Yan
    Li, Jie
    Chen, Xianfu
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [8] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer
    Ge, Zhijin
    Shang, Fanhua
    Liu, Hongying
    Liu, Yuanyuan
    Wan, Liang
    Feng, Wei
    Wang, Xiaosen
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4440 - 4449
  • [9] Improving the transferability of adversarial examples through neighborhood attribution
    Ke, Wuping
    Zheng, Desheng
    Li, Xiaoyu
    He, Yuanhang
    Li, Tianyu
    Min, Fan
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 296
  • [10] Improving the transferability of adversarial examples via direction tuning
    Yang, Xiangyuan
    Lin, Jie
    Zhang, Hanlin
    Yang, Xinyu
    Zhao, Peng
    [J]. INFORMATION SCIENCES, 2023, 647