DFDS: Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack

Times Cited: 0
Authors
Jiang, Shuliang [1 ]
He, Yusheng [1 ]
Zhang, Rui [1 ]
Kang, Zi [1 ]
Xia, Hui [1 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; Adversarial attack; White-box/black-box attack; Transfer-based adversarial attacks; Adversarial examples;
DOI
10.1007/978-981-97-5498-4_21
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Transfer-based hard-label black-box adversarial attacks confront challenges in obtaining pertinent proxy datasets and demand a substantial query volume to the target model without guaranteeing a high attack success rate. To address these challenges, we introduce the techniques of dual substitute model extraction and embedding-space adversarial example search, and propose a novel hard-label black-box adversarial attack approach named Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack (DFDS). The approach first trains a generative adversarial network through adversarial training; this training requires no proxy dataset and depends only on the hard-label outputs of the target model. It then uses a natural evolution strategy (NES) to search the embedding space and construct the final adversarial examples. Comprehensive experimental results demonstrate that, under the same query volume, DFDS achieves higher attack success rates than baseline methods. Compared with the state-of-the-art mixed-mechanism hard-label black-box attack DFMS-HL, DFDS shows significant improvements on the SVHN, CIFAR-10, and CIFAR-100 datasets. Notably, in the targeted attack scenario on CIFAR-10, the success rate reaches 76.59%, the largest improvement of 21.99%.
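The abstract outlines a two-stage pipeline: data-free substitute training of a GAN queried only for hard labels, followed by an NES search in the generator's embedding space to build the final adversarial example. The sketch below illustrates only the second stage under assumed interfaces: `G` (a trained substitute generator mapping a latent vector to an image), `query_label` (the target model's hard-label oracle), and all hyperparameters are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): NES search in the generator's
# embedding space using only hard-label feedback from the target model.
import numpy as np

def nes_embedding_attack(G, query_label, z0, target_class,
                         sigma=0.1, lr=0.02, pop=50, iters=200):
    """Search for a latent z whose decoded image G(z) the black-box
    model classifies as `target_class` (targeted-attack setting)."""
    z = z0.copy()
    for _ in range(iters):
        # Antithetic Gaussian sampling around the current latent point.
        eps = np.random.randn(pop // 2, *z.shape)
        eps = np.concatenate([eps, -eps], axis=0)
        # Hard-label fitness: 1 if the decoded perturbed latent hits the target class.
        fitness = np.array([
            1.0 if query_label(G(z + sigma * e)) == target_class else 0.0
            for e in eps
        ])
        if fitness.std() > 0:
            fitness = (fitness - fitness.mean()) / fitness.std()
        # NES gradient estimate and ascent step in the embedding space.
        grad = (fitness[:, None] * eps.reshape(len(eps), -1)).mean(0) / sigma
        z = z + lr * grad.reshape(z.shape)
        if query_label(G(z)) == target_class:
            break
    return G(z)
```

Each NES iteration spends `pop` queries on the hard-label oracle, so the query budget grows linearly with the number of iterations; the antithetic sampling halves the variance of the gradient estimate at no extra query cost.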
Pages: 274-285 (12 pages)