No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures

Cited: 0

Authors
Mou, Ningping [1 ,2 ]
Guo, Binqing [1 ]
Zhao, Lingchen [1 ]
Wang, Cong [3 ]
Zhao, Yue
Wang, Qian [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
[2] City Univ Hong Kong, Dept Comp Sci, Hong Kong 999077, Peoples R China
[3] City Univ Hong Kong, Dept Comp Sci, Hong Kong 999077, Peoples R China
Keywords
Perturbation methods; Glass box; Closed box; Threat modeling; Training; Optimization; Deep learning; Urban areas; Trusted computing; Information security; Universal adversarial perturbation; no-box attack; artificial texture; starting point
DOI
10.1109/TIFS.2024.3478828
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed perturbation budget, it can achieve an attack success rate of 80% under an ℓ∞ constraint of 10/255. In addition, Texture-Adv can also take effect under traditional black-box threat models. Building upon a phenomenon associated with dominant labels, we utilize Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, significantly reducing the average query budget to less than 1.3, which is near the 1-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, when used as a starting point, can enhance the success rates of existing transfer attacks and the efficiency of decision-based attacks. This discovery suggests its potential as an effective starting point for various adversarial attacks while preserving the original constraints of their threat models.
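The abstract's core idea of applying an artificial texture as a universal perturbation under an ℓ∞ budget can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the tiling strategy, and the centering around 0.5 are all assumptions made for illustration; only the ℓ∞-ball projection and the valid-pixel clipping follow standard adversarial-attack conventions.

```python
import numpy as np

def apply_texture_uap(image, texture, eps=10 / 255):
    """Apply a tiled texture as a universal perturbation (illustrative sketch).

    image:   H x W x C float array in [0, 1]
    texture: h x w x C float array in [0, 1]
    eps:     l_inf perturbation budget
    """
    h, w, c = image.shape
    # Tile the small texture patch to cover the full image.
    reps = (int(np.ceil(h / texture.shape[0])),
            int(np.ceil(w / texture.shape[1])), 1)
    tiled = np.tile(texture, reps)[:h, :w, :c]
    # Center the texture around zero and project onto the l_inf ball of radius eps.
    delta = np.clip(tiled - 0.5, -eps, eps)
    # Add the perturbation and keep the result a valid image in [0, 1].
    return np.clip(image + delta, 0.0, 1.0)
```

Because `delta` is image-independent, the same perturbation can be reused across an entire dataset, which is what makes the attack "universal"; the final clip guarantees both the pixel-range and budget constraints hold.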
Pages: 9803-9818 (16 pages)