TransNoise: Transferable Universal Adversarial Noise for Adversarial Attack

Cited by: 0
Authors
Wei, Yier [1 ]
Gao, Haichang [1 ]
Wang, Yufei [1 ]
Liu, Huan [1 ]
Gao, Yipeng [1 ]
Luo, Sainan [1 ]
Guo, Qianwen [2 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[2] SongShan Lab, Zhengzhou 452470, Peoples R China
Keywords
Adversarial attack; Universal adversarial noise; Deep neural networks;
DOI
10.1007/978-3-031-44192-9_16
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have been proven to be vulnerable to adversarial attacks. Early attacks were mostly image-specific, generating a separate adversarial noise for each individual image. More recent studies have further demonstrated that neural networks can also be fooled by image-agnostic noises, called "universal adversarial perturbations". However, current universal adversarial attacks mainly focus on untargeted attacks and exhibit poor transferability. In this paper, we propose TransNoise, a new approach for implementing a transferable universal adversarial attack that modifies only a few pixels of the image. Our approach achieves state-of-the-art success rates in the universal adversarial attack domain for both targeted and untargeted settings. The experimental results demonstrate that our method outperforms current methods on three aspects of universality: 1) when our universal adversarial noises are added to different images, the fooling rates of our method on the target model are almost all above 95%; 2) when no training data are available for the target model, our method is still able to implement targeted attacks; 3) the method transfers well across different models in the untargeted setting.
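The core idea the abstract relies on can be illustrated with a minimal sketch: a single image-agnostic noise vector is crafted once and added to every input, and the fooling rate is the fraction of inputs whose prediction flips. This is only a toy illustration of a universal perturbation against a linear model (an FGSM-style sign step on an averaged gradient), not the TransNoise algorithm itself, whose construction is not described in this record; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": logits = x @ W, prediction = argmax over classes.
n_classes, dim = 3, 64
W = rng.normal(size=(dim, n_classes))

def predict(x):
    return (x @ W).argmax(axis=1)

# A batch of clean inputs and the model's original predictions.
images = rng.normal(size=(200, dim))
clean_labels = predict(images)

# One image-agnostic perturbation: sign of the loss gradient averaged
# over the whole batch (the "universal" part: the same noise vector is
# later added to every image).
eps = 0.5
grad = -W[:, clean_labels].T                 # d(-true-class score)/dx, shape (200, dim)
universal_noise = eps * np.sign(grad.mean(axis=0))

# Fooling rate: fraction of images whose prediction changes under the noise.
fooled = predict(images + universal_noise) != clean_labels
fooling_rate = fooled.mean()
print(f"fooling rate: {fooling_rate:.2%}")
```

The key contrast with image-specific attacks is that `universal_noise` has shape `(dim,)`, not `(200, dim)`: it is computed once and broadcast across the batch.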
Pages: 193-205
Page count: 13