TransNoise: Transferable Universal Adversarial Noise for Adversarial Attack

Cited by: 0
Authors
Wei, Yier [1 ]
Gao, Haichang [1 ]
Wang, Yufei [1 ]
Liu, Huan [1 ]
Gao, Yipeng [1 ]
Luo, Sainan [1 ]
Guo, Qianwen [2 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[2] SongShan Lab, Zhengzhou 452470, Peoples R China
Keywords
Adversarial attack; Universal adversarial noise; Deep neural networks;
DOI
10.1007/978-3-031-44192-9_16
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have been proven to be vulnerable to adversarial attacks. Early attacks were mostly image-specific, generating a distinct adversarial noise for each individual image. More recent studies have demonstrated that neural networks can also be fooled by image-agnostic noise, called a "universal adversarial perturbation". However, current universal adversarial attacks mainly focus on untargeted attacks and exhibit poor transferability. In this paper, we propose TransNoise, a new approach for implementing a transferable universal adversarial attack that modifies only a few pixels of the image. Our approach achieves state-of-the-art success rates in the universal adversarial attack domain in both targeted and untargeted settings. The experimental results demonstrate that our method outperforms current methods in three aspects of universality: 1) when our universal adversarial noise is added to different images, the fooling rates on the target model are almost all above 95%; 2) even when no training data are available for the targeted model, our method can still implement targeted attacks; 3) the method transfers well across different models in the untargeted setting.
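The core idea the abstract describes, applying one image-agnostic noise pattern to many different images and measuring how often the model's prediction flips, can be illustrated generically. The sketch below is an assumption-laden illustration of the concept only (TransNoise's actual optimization procedure is not given in this record); the array shapes, the sparsity level, and the `fooling_rate` helper are hypothetical.

```python
import numpy as np

def apply_universal_noise(images, noise, clip_min=0.0, clip_max=1.0):
    """Add ONE image-agnostic noise pattern to every image in a batch,
    keeping pixel values in the valid range."""
    return np.clip(images + noise, clip_min, clip_max)

def fooling_rate(clean_preds, adv_preds):
    """Untargeted fooling rate: fraction of samples whose predicted
    label changes after the perturbation is applied."""
    clean_preds = np.asarray(clean_preds)
    adv_preds = np.asarray(adv_preds)
    return float(np.mean(clean_preds != adv_preds))

# A sparse universal noise that touches only a few pixels, echoing the
# "modifies only a few pixels" property (positions/values are illustrative).
rng = np.random.default_rng(0)
noise = np.zeros((8, 8, 3))                  # hypothetical 8x8 RGB images
idx = rng.choice(64, size=5, replace=False)  # perturb 5 of 64 pixel positions
noise.reshape(64, 3)[idx] = 0.5

images = rng.random((4, 8, 8, 3))            # batch of 4 different images
adv = apply_universal_noise(images, noise)   # same noise added to all of them
```

In a real evaluation, `clean_preds` and `adv_preds` would come from running a classifier on `images` and `adv`; a universal attack is judged by how high `fooling_rate` stays across images the noise was never tuned on.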
Pages: 193-205
Page count: 13