Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation

Cited by: 1
Authors
Aithal, Manjushree B. [1 ]
Li, Xiaohua [1 ]
Affiliations
[1] Binghamton Univ, Dept Elect & Comp Engn, Binghamton, NY 13902 USA
Keywords
Perturbation methods; Signal to noise ratio; Standards; Noise level; White noise; Noise measurement; Neural networks; Deep learning; adversarial machine learning; black-box attack; noise perturbation; performance analysis
DOI
10.1109/ACCESS.2022.3146198
CLC Number (Chinese Library Classification)
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
In black-box adversarial attacks, attackers query the deep neural network (DNN) and use the query results to iteratively optimize adversarial samples. In this paper, we study adding white noise to the DNN output as a way to mitigate such attacks. One of our unique contributions is a theoretical analysis of the gradient signal-to-noise ratio (SNR), which exposes the trade-off between the defense noise level and the attack query cost. The attacker's query count (QC) is derived mathematically as a function of the noise standard deviation, which guides the defender in choosing a noise level that achieves the desired security level, specified by QC and acceptable DNN performance loss. Our analysis shows that the added noise is drastically magnified relative to the small variations of the DNN outputs that the attacker probes, so the reconstructed gradient has an extremely low SNR. White noise with a very small standard deviation, e.g., less than 0.01, is enough to increase QC by many orders of magnitude without introducing any noticeable loss of classification accuracy. Our experiments demonstrate that this method effectively mitigates both soft-label and hard-label black-box attacks under realistic QC constraints. We also show that it outperforms many other defense methods and is robust to the attacker's countermeasures.
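To make the defense idea concrete, below is a minimal Python sketch, not the authors' implementation: Gaussian noise with a small standard deviation is added to the softmax output before it is returned to the querying client. The function names (`softmax`, `noisy_predict`), the toy logits, and the probe size are illustrative assumptions; the example only shows why a finite-difference query pair yields a very low gradient SNR once output noise is added.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def noisy_predict(logits, sigma=0.01, hard_label=False, rng=None):
    """Return DNN outputs perturbed with white Gaussian noise of std `sigma`.

    Illustrative sketch of the defense described in the abstract: a querying
    attacker reconstructs gradients from tiny differences between outputs,
    so even sigma < 0.01 can swamp that difference while barely moving the
    predicted class.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = softmax(logits)
    noisy = probs + rng.normal(scale=sigma, size=probs.shape)
    if hard_label:
        # Hard-label API: only the (possibly noise-flipped) class index is exposed.
        return noisy.argmax(axis=-1)
    return noisy  # Soft-label API: noisy confidence scores are exposed.

# Toy usage: two nearly identical queries, as in a finite-difference attack.
logits_a = np.array([2.0, 1.0, 0.1])
logits_b = logits_a + 1e-4          # attacker's tiny probe perturbation
clean_diff = softmax(logits_b) - softmax(logits_a)
noisy_diff = noisy_predict(logits_b) - noisy_predict(logits_a)
print(clean_diff)   # ~1e-5 scale signal
print(noisy_diff)   # dominated by ~1e-2 scale noise, i.e., very low gradient SNR
```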
Pages: 12395-12411
Page count: 17
Related Papers
50 records in total
  • [41] Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
    Brunner, Thomas
    Diehl, Frederik
    Le, Michael Truong
    Knoll, Alois
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4957 - 4965
  • [42] Black-box adversarial attacks against image quality assessment models
    Ran, Yu
    Zhang, Ao-Xiang
    Li, Mingjie
    Tang, Weixuan
    Wang, Yuan-Gen
EXPERT SYSTEMS WITH APPLICATIONS, 2025, 260
  • [43] White-box and Black-box Adversarial Attacks to Obstacle Avoidance in Mobile Robots
    Rano, Inaki
    Christensen, Anders Lyhne
    2023 EUROPEAN CONFERENCE ON MOBILE ROBOTS, ECMR, 2023, : 64 - 69
  • [44] AKD: Using Adversarial Knowledge Distillation to Achieve Black-box Attacks
    Lian, Xin
    Huang, Zhiqiu
    Wang, Chao
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [45] Ensemble adversarial black-box attacks against deep learning systems
    Hang, Jie
    Han, Keji
    Chen, Hui
    Li, Yun
    PATTERN RECOGNITION, 2020, 101
  • [46] AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows
    Dolatabadi, Hadi M.
    Erfani, Sarah
    Leckie, Christopher
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [47] Enhancing Transferability of Black-box Adversarial Attacks via Lifelong Learning for Speech Emotion Recognition Models
    Ren, Zhao
    Han, Jing
    Cummins, Nicholas
    Schuller, Bjoern W.
    INTERSPEECH 2020, 2020, : 496 - 500
  • [48] GCSA: A New Adversarial Example-Generating Scheme Toward Black-Box Adversarial Attacks
    Fan, Xinxin
    Li, Mengfan
    Zhou, Jia
    Jing, Quanliang
    Lin, Chi
    Lu, Yunfeng
    Bi, Jingping
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 2038 - 2048
  • [49] Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box
    Chitic, Raluca
    Topal, Ali Osman
    Leprevost, Franck
APPLIED SCIENCES-BASEL, 2022, 12 (14)
  • [50] Query-Efficient Black-Box Adversarial Attacks on Automatic Speech Recognition
    Tong, Chuxuan
    Zheng, Xi
    Li, Jianhua
    Ma, Xingjun
    Gao, Longxiang
    Xiang, Yong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 3981 - 3992