Simple Black-box Adversarial Attacks

Cited by: 0
Authors
Guo, Chuan [1 ]
Gardner, Jacob R. [2 ]
You, Yurong [1 ]
Wilson, Andrew Gordon [1 ]
Weinberger, Kilian Q. [1 ]
Affiliations
[1] Cornell Univ, Dept Comp Sci, Ithaca, NY 14853 USA
[2] Uber AI Labs, San Francisco, CA USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose an intriguingly simple method for the construction of adversarial images in the black-box setting. In contrast to the white-box scenario, constructing black-box adversarial images is subject to an additional constraint on the query budget, and efficient attacks remain an open problem to date. With only the mild assumption of continuous-valued confidence scores, our highly query-efficient algorithm utilizes the following simple iterative principle: we randomly sample a vector from a predefined orthonormal basis and either add it to or subtract it from the target image. Despite its simplicity, the proposed method can be used for both untargeted and targeted attacks, resulting in unprecedented query efficiency in both settings. We demonstrate the efficacy and efficiency of our algorithm in several real-world settings, including the Google Cloud Vision API. We argue that our proposed algorithm should serve as a strong baseline for future black-box attacks, in particular because it is extremely fast and its implementation requires less than 20 lines of PyTorch code.
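The iterative principle described in the abstract can be illustrated with a short PyTorch sketch. This is a minimal, hedged reconstruction of an untargeted attack loop using the standard pixel basis, not the authors' released implementation; the names `model`, `x`, `y`, `epsilon`, and `num_iters` are placeholders chosen for illustration, and `model` is assumed to return logits for a batch of images.

```python
import torch

@torch.no_grad()  # black-box setting: only forward queries, no gradients
def simba_untargeted(model, x, y, epsilon=0.2, num_iters=10000):
    """Minimal sketch of the iterative principle from the abstract: pick a
    random direction from an orthonormal basis (here, the standard pixel
    basis) and keep whichever of the +/- epsilon steps lowers the model's
    confidence in the true label y. `x` is a single image of shape (C, H, W)
    with values in [0, 1]; `y` is the true class index."""
    n_dims = x.numel()
    perm = torch.randperm(n_dims)                       # random order over basis vectors
    x_adv = x.clone()
    last_prob = torch.softmax(model(x_adv.unsqueeze(0)), dim=1)[0, y]
    for i in range(min(num_iters, n_dims)):
        diff = torch.zeros(n_dims)
        diff[perm[i]] = epsilon                         # scaled standard basis vector
        for sign in (1.0, -1.0):                        # try +epsilon first, then -epsilon
            candidate = (x_adv + sign * diff.view_as(x)).clamp(0, 1)
            prob = torch.softmax(model(candidate.unsqueeze(0)), dim=1)[0, y]
            if prob < last_prob:                        # keep the step if confidence drops
                x_adv, last_prob = candidate, prob
                break
    return x_adv
```

The pixel basis keeps the sketch self-contained; the paper also considers other orthonormal bases (such as a low-frequency DCT basis), and a targeted variant instead maximizes the confidence of a chosen target class.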
Pages: 10