Enhancing the transferability of adversarial samples with random noise techniques

Cited by: 2
Authors
Huang, Jiahao [1 ]
Wen, Mi [1 ]
Wei, Minjie [1 ]
Bi, Yanbing [2 ]
Affiliations
[1] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai 201306, Peoples R China
[2] State Grid Info & Telecom Grp, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Adversarial samples; Adversarial attack; Adversarial transferability; DNN security; ARCHITECTURES;
DOI
10.1016/j.cose.2023.103541
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812
Abstract
Deep neural networks have achieved remarkable success in the field of computer vision. However, they are susceptible to adversarial attacks. The transferability of adversarial samples has made practical black-box attacks feasible, underscoring the importance of research on transferability. Existing work indicates that adversarial samples tend to overfit to the source model, getting trapped in local optima, thereby reducing the transferability of adversarial samples. To address this issue, we propose the Random Noise Transfer Attack (RNTA) to search for adversarial samples in a larger data distribution, seeking the global optimum. Specifically, we suggest injecting multiple random noise perturbations into the sample before each iteration of sample optimization, effectively exploring the decision boundary within an extended data distribution space. By aggregating gradients, we identify a better global optimum, mitigating the issue of overfitting to the source model. Through extensive experiments on the large-scale visual classification task on ImageNet, we demonstrate that our method increases the success rate of momentum-based attacks by an average of 20.1%. Furthermore, our approach can be combined with existing attack methods, achieving a success rate of 94.3%, which highlights the insecurity of current models and defense mechanisms.
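The core idea of the abstract — inject several random noise perturbations before each optimization step, aggregate the resulting gradients, and feed the average into a momentum-based update — can be sketched as follows. This is a minimal illustrative sketch on a toy differentiable model, not the paper's implementation; the function name `rnta_attack` and all hyperparameter values (`n_noise`, `noise_scale`, step size, decay factor `mu`) are assumptions, and the momentum update follows the standard MI-FGSM form.

```python
import numpy as np

def rnta_attack(x, y, grad_fn, eps=0.05, steps=5, alpha=None,
                mu=1.0, n_noise=5, noise_scale=0.02, seed=0):
    """Sketch of the Random Noise Transfer Attack (RNTA) idea: before each
    iteration, sample several random noise perturbations of the current
    adversarial sample, average the gradients computed at those perturbed
    points, and apply a momentum (MI-FGSM-style) sign update projected
    onto the eps-ball around the clean input.

    grad_fn(x, y) must return the gradient of the attack loss w.r.t. x.
    """
    rng = np.random.default_rng(seed)
    alpha = alpha if alpha is not None else eps / steps
    x_adv = x.copy()
    g = np.zeros_like(x)                      # momentum accumulator
    for _ in range(steps):
        # Aggregate gradients over several noise-perturbed copies,
        # exploring a wider data distribution around x_adv.
        grad = np.zeros_like(x)
        for _ in range(n_noise):
            noise = rng.uniform(-noise_scale, noise_scale, size=x.shape)
            grad += grad_fn(x_adv + noise, y)
        grad /= n_noise
        # Momentum update with L1-normalized gradient (as in MI-FGSM).
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # Project back into the eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy usage: the "model" is a linear scorer w.x with a hinge-like loss
# L = -y * (w.x), so the loss gradient is -y * w (a hypothetical stand-in
# for a real network's backward pass).
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x, y: -y * w
x = np.array([0.2, 0.1, -0.3])
x_adv = rnta_attack(x, y=1.0, grad_fn=grad_fn, eps=0.05, steps=5)
```

In a real attack, `grad_fn` would be the gradient of the classification loss obtained by backpropagation through the source model; the noise injection and gradient aggregation are what distinguish this from plain MI-FGSM.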
Pages: 12
Related papers (50 total)
  • [21] Enhancing adversarial transferability with partial blocks on vision transformer
    Yanyang Han
    Ju Liu
    Xiaoxi Liu
    Xiao Jiang
    Lingchen Gu
    Xuesong Gao
    Weiqiang Chen
    Neural Computing and Applications, 2022, 34 : 20249 - 20262
  • [22] Enhancing Transferability of Adversarial Examples by Successively Attacking Multiple Models
    Zhang, Xiaolin
    Zhang, Wenwen
    Liu, Lixin
    Wang, Yongping
    Gao, Lu
    Zhang, Shuai
    International Journal of Network Security, 2023, 25 (02) : 306 - 316
  • [23] ENHANCING THE ADVERSARIAL TRANSFERABILITY OF VISION TRANSFORMERS THROUGH PERTURBATION INVARIANCE
    Zeng Boheng
    2022 19TH INTERNATIONAL COMPUTER CONFERENCE ON WAVELET ACTIVE MEDIA TECHNOLOGY AND INFORMATION PROCESSING (ICCWAMTIP), 2022,
  • [24] ENHANCING ADVERSARIAL TRANSFERABILITY IN OBJECT DETECTION WITH BIDIRECTIONAL FEATURE DISTORTION
    Ding, Xinlong
    Chen, Jiansheng
    Yu, Hongwei
    Shang, Yu
    Ma, Huimin
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 5525 - 5529
  • [25] LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate
    Wu, Tao
    Luo, Tie
    Wunsch, Donald C., II
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 6135 - 6143
  • [26] Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability
    Gao, Junqi
    Qi, Biqing
    Li, Yao
    Guo, Zhichang
    Li, Dong
    Xing, Yuming
    Zhang, Dazhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [27] Improving the Transferability of Adversarial Samples by Path-Augmented Method
    Zhang, Jianping
    Huang, Jen-tse
    Wang, Wenxuan
    Li, Yichen
    Wu, Weibin
    Wang, Xiaosen
    Su, Yuxin
    Lyu, Michael R.
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8173 - 8182
  • [28] Enhancing transferability of adversarial examples via rotation-invariant attacks
    Duan, Yexin
    Zou, Junhua
    Zhou, Xingyu
    Zhang, Wu
    Zhang, Jin
    Pan, Zhisong
    IET COMPUTER VISION, 2022, 16 (01) : 1 - 11
  • [29] Enhancing adversarial attack transferability with multi-scale feature attack
    Sun, Caixia
    Zou, Lian
    Fan, Cien
    Shi, Yu
    Liu, Yifeng
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2021, 19 (02)
  • [30] Enhancing the Transferability of Adversarial Examples Based on Nesterov Momentum for Recommendation Systems
    Qian, Fulan
    Yuan, Bei
    Chen, Hai
    Chen, Jie
    Lian, Defu
    Zhao, Shu
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (05) : 1276 - 1287