Data-Free Adversarial Perturbations for Practical Black-Box Attack

Cited by: 7
Authors
Huan, Zhaoxin [1,2]
Wang, Yulong [2,3]
Zhang, Xiaolu [2 ]
Shang, Lin [1 ]
Fu, Chilin [2 ]
Zhou, Jun [2 ]
Affiliations
[1] Nanjing Univ, Dept Comp Sci & Technol, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Ant Financial Serv Grp, Hangzhou, Peoples R China
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Adversarial machine learning; Black-box adversarial perturbations
DOI
10.1007/978-3-030-47436-2_10
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Neural networks are vulnerable to adversarial examples: malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transferability, meaning that an adversarial example crafted for one model can also fool another model. However, existing black-box attack methods require samples from the training data distribution to improve the transferability of adversarial examples across models. Because of this data dependence, the fooling ability of such adversarial perturbations applies only when training data are accessible. In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge of the training data distribution. In the practical black-box setting, where attackers have access to neither the target model nor its training data, our method achieves high fooling rates on target models and outperforms other universal adversarial perturbation methods. Our results show empirically that current deep learning models remain at risk even when attackers have no access to the training data.
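The record contains no code, but the data-free idea the abstract describes can be illustrated. The sketch below is a rough, hypothetical rendering of one common data-free strategy: optimizing a single universal perturbation so that a white-box surrogate model's intermediate activations are maximally disturbed, using random inputs in place of the inaccessible training data. It is written in the spirit of data-free universal-perturbation objectives from the literature, not as the authors' exact procedure; the surrogate network, hooked layer, epsilon budget, learning rate, and step count are all assumptions.

```python
# Hypothetical sketch: craft a data-free universal perturbation by
# disrupting a surrogate model's internal activations, then transfer
# the perturbation to an unseen black-box target model.
import torch
import torchvision.models as models

eps = 10 / 255                                   # assumed L_inf budget
surrogate = models.vgg16(pretrained=True).eval() # assumed surrogate model
for p in surrogate.parameters():                 # freeze surrogate weights
    p.requires_grad_(False)

# Capture a mid-level feature map with a forward hook (layer choice assumed).
acts = {}
surrogate.features[15].register_forward_hook(
    lambda module, inp, out: acts.update(feat=out))

delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    x = torch.rand(8, 3, 224, 224)               # random proxy inputs, no real data
    surrogate(torch.clamp(x + delta, 0.0, 1.0))  # hook fills acts["feat"]
    # Maximize activation magnitude, i.e. minimize its negative log-norm.
    loss = -torch.log(acts["feat"].norm())
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                  # project back into the budget
```

Once optimized, delta is simply added to any input fed to the black-box target; the attack relies on transferability rather than query access to the target model.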
Pages: 127-138
Number of pages: 12