Towards Lightweight Black-Box Attacks Against Deep Neural Networks

Cited by: 0
Authors
Sun, Chenghao [1 ]
Zhang, Yonggang [2 ]
Wan, Chaoqun [3]
Wang, Qizhou [2 ]
Li, Ya [4 ]
Liu, Tongliang [5 ]
Han, Bo [2 ]
Tian, Xinmei [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] Hong Kong Baptist Univ, Hong Kong, Peoples R China
[3] Alibaba Cloud Comp Ltd, Beijing, Peoples R China
[4] IFlytek Res, Hefei, Anhui, Peoples R China
[5] Univ Sydney, Sydney, NSW 2006, Australia
Funding
Australian Research Council;
Keywords
DOI
(not available)
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Black-box attacks can generate adversarial examples without accessing the parameters of deep neural networks (DNNs), greatly exacerbating the threat to deployed models. However, previous works state that black-box attacks fail to mislead DNNs when their training data and outputs are inaccessible. In this work, we argue that black-box attacks can pose practical threats even in this highly restrictive scenario, where only a few test samples are available. Specifically, we find that attacking the shallow layers of DNNs trained on a few test samples can generate powerful adversarial examples. As only a few samples are required, we refer to these attacks as lightweight black-box attacks. The main challenge in promoting lightweight attacks is mitigating the adverse impact caused by the approximation error of the shallow layers. Since this error is hard to mitigate with only a few available samples, we propose the Error TransFormer (ETF) for lightweight attacks. Namely, ETF transforms the approximation error in the parameter space into a perturbation in the feature space and alleviates the error by disturbing features. In our experiments, lightweight black-box attacks with the proposed ETF achieve surprising results. For example, even if only 1 sample per category is available, the attack success rate of lightweight black-box attacks is only about 3% lower than that of black-box attacks using complete training data.
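To make the core idea of the abstract concrete, below is a minimal PyTorch sketch of a feature-space attack on the shallow layers of a surrogate model. It is an illustration under assumptions, not the authors' ETF algorithm: the function name `shallow_feature_attack`, the `shallow` module, and the defaults for `epsilon`, `alpha`, and `steps` are all hypothetical choices for this sketch.

```python
# A minimal sketch (NOT the authors' ETF algorithm) of the general idea the
# abstract describes: attack a surrogate's shallow layers by maximizing the
# feature-space distortion of the input, so neither training data nor model
# outputs are needed. `shallow`, `epsilon`, `alpha`, and `steps` are
# hypothetical names/values chosen for this illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def shallow_feature_attack(shallow: nn.Module, x: torch.Tensor,
                           epsilon: float = 8 / 255, alpha: float = 2 / 255,
                           steps: int = 10) -> torch.Tensor:
    """PGD-style attack that pushes shallow-layer features away from those
    of the clean input, under an L_inf budget of `epsilon`."""
    shallow.eval()
    clean_feat = shallow(x).detach()  # reference features of the clean input
    # Random start inside the L_inf ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Larger shallow-feature distortion -> stronger adversarial example.
        loss = F.mse_loss(shallow(x_adv), clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # gradient ascent
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project to ball
        x_adv = x_adv.clamp(0, 1)                         # valid pixel range
    return x_adv.detach()
```

In this sketch, `shallow` would be the first few layers of a surrogate trained on the handful of available test samples; per the abstract, ETF goes further by also disturbing the features themselves so that the surrogate's approximation error is absorbed as a feature-space perturbation.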
Pages: 13
Related Papers
(50 records in total)
  • [1] Simple Black-Box Adversarial Attacks on Deep Neural Networks
    Narodytska, Nina
    Kasiviswanathan, Shiva
    [J]. 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, : 1310 - 1318
  • [2] Black-Box Testing of Deep Neural Networks
    Byun, Taejoon
    Rayadurgam, Sanjai
    Heimdahl, Mats P. E.
    [J]. 2021 IEEE 32ND INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING (ISSRE 2021), 2021, : 309 - 320
  • [3] Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
    Bhagoji, Arjun Nitin
    He, Warren
    Li, Bo
    Song, Dawn
    [J]. COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 : 158 - 174
  • [4] Orthogonal Deep Models as Defense Against Black-Box Attacks
    Jalwana, Mohammad A. A. K.
    Akhtar, Naveed
    Bennamoun, Mohammed
    Mian, Ajmal
    [J]. IEEE ACCESS, 2020, 8 : 119744 - 119757
  • [5] Black-box Attacks Against Neural Binary Function Detection
    Bundt, Joshua
    Davinroy, Michael
    Agadakos, Ioannis
    Oprea, Alina
    Robertson, William
    [J]. PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023, : 1 - 16
  • [6] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
[J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023
  • [7] Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification
    Koga, Kazuki
    Takemoto, Kazuhiro
    [J]. ALGORITHMS, 2022, 15 (05)
  • [8] Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks
    Li, Huiying
    Shan, Shawn
    Wenger, Emily
    Zhang, Jiayun
    Zheng, Haitao
    Zhao, Ben Y.
    [J]. PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022, : 2117 - 2134
  • [9] Ensemble adversarial black-box attacks against deep learning systems
    Hang, Jie
    Han, Keji
    Chen, Hui
    Li, Yun
    [J]. PATTERN RECOGNITION, 2020, 101
  • [10] Online Black-Box Confidence Estimation of Deep Neural Networks
    Woitschek, Fabian
    Schneider, Georg
    [J]. 2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2022, : 183 - 189