Query efficient black-box adversarial attack on deep neural networks

Cited: 20
Authors
Bai, Yang [1 ,2 ]
Wang, Yisen [2 ,3 ,4 ,6 ]
Zeng, Yuyuan [2 ]
Jiang, Yong [2 ,5 ]
Xia, Shu-Tao [2 ,5 ]
Affiliations
[1] Tencent Secur Zhuque Lab, Shenzhen, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] Peking Univ, Sch Intelligence Sci & Technol, Key Lab Machine Percept MoE, Beijing, Peoples R China
[4] Peking Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[5] Peng Cheng Lab, Shenzhen, Peoples R China
[6] Peking Univ, Sch Intelligence Sci & Technol, Key Lab Machine Percept MoE, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Black-box adversarial attack; Adversarial distribution; Query efficiency; Neural process;
DOI
10.1016/j.patcog.2022.109037
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, yet they are vulnerable to adversarial examples, which can be easily generated when the target model is accessible to an attacker (the white-box setting). As many machine learning models are deployed via online services that provide only query outputs from otherwise inaccessible models (e.g., the Google Cloud Vision API), black-box adversarial attacks raise more critical security concerns in practice than white-box ones. However, existing query-based black-box adversarial attacks often require excessive model queries to maintain a high attack success rate. To improve query efficiency, we explore the distribution of adversarial examples around benign inputs with the help of image structure information characterized by a Neural Process, and propose a Neural Process based black-box adversarial attack (NP-Attack). NP-Attack can be further boosted when combined with surrogate models or tiling tricks. Extensive experiments show that NP-Attack greatly decreases the query count in the black-box setting. © 2022 Elsevier Ltd. All rights reserved.
Pages: 11
Related Papers (50 records; first 10 shown)
  • [1] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    [J]. PATTERN RECOGNITION, 2022, 131
  • [2] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    [J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023
  • [3] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    [J]. 2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [4] A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks
    Kuang, Xiaohui
    Liu, Hongyi
    Wang, Ye
    Zhang, Qikun
    Zhang, Quanxin
    Zheng, Jun
    [J]. IEEE ACCESS, 2019, 7 : 172938 - 172947
  • [5] GenDroid: A query-efficient black-box android adversarial attack framework
    Xu, Guangquan
    Shao, Hongfei
    Cui, Jingyi
    Bai, Hongpeng
    Li, Jiliang
    Bai, Guangdong
    Liu, Shaoying
    Meng, Weizhi
    Zheng, Xi
    [J]. COMPUTERS & SECURITY, 2023, 132
  • [6] Query-Efficient Black-Box Adversarial Attack with Random Pattern Noises
    Yuito, Makoto
    Suzuki, Kenta
    Yoneyama, Kazuki
    [J]. INFORMATION AND COMMUNICATIONS SECURITY, ICICS 2022, 2022, 13407 : 303 - 323
  • [7] Query-Efficient Black-Box Adversarial Attack With Customized Iteration and Sampling
    Shi, Yucheng
    Han, Yahong
    Hu, Qinghua
    Yang, Yi
    Tian, Qi
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (02) : 2226 - 2245
  • [8] Simple Black-Box Adversarial Attacks on Deep Neural Networks
    Narodytska, Nina
    Kasiviswanathan, Shiva
    [J]. 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, : 1310 - 1318
  • [9] Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
    Bhagoji, Arjun Nitin
    He, Warren
    Li, Bo
    Song, Dawn
    [J]. COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 : 158 - 174
  • [10] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
    Li, Yandong
    Li, Lijun
    Wang, Liqiang
    Zhang, Tong
    Gong, Boqing
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97