Saliency Attack: Towards Imperceptible Black-box Adversarial Attack

Cited by: 4
|
Authors
Dai, Zeyu [1 ,2 ]
Liu, Shengcai [3 ]
Li, Qing [4 ]
Tang, Ke [5 ]
Affiliations
[1] Southern Univ Sci & Technol, Res Inst Trustworthy Autonomous Syst, Shenzhen, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[5] Southern Univ Sci & Technol, Res Inst Trustworthy Autonomous Syst, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial example; black-box adversarial attack; saliency map; deep neural networks;
DOI
10.1145/3582563
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks are vulnerable to adversarial examples, even in the black-box setting where the attacker only has access to the model output. Recent studies have devised effective black-box attacks with high query efficiency. However, such performance is often accompanied by compromises in attack imperceptibility, hindering the practical use of these approaches. In this article, we propose to restrict the perturbations to a small salient region to generate adversarial examples that can hardly be perceived. This approach is readily compatible with many existing black-box attacks and can significantly improve their imperceptibility with little degradation in attack success rates. Furthermore, we propose the Saliency Attack, a new black-box attack aiming to refine the perturbations in the salient region to achieve even better imperceptibility. Extensive experiments show that, compared to the state-of-the-art black-box attacks, our approach achieves much better imperceptibility scores, including most apparent distortion (MAD), L0 and L2 distances, and also obtains a significantly better true success rate and effective query number judged by a human-like threshold on MAD. Importantly, the perturbations generated by our approach are interpretable to some extent. Finally, it is also demonstrated to be robust to different detection-based defenses.
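The core idea described in the abstract (confining the adversarial perturbation to a small salient region) can be sketched as a masking step. The snippet below is a minimal illustration, not the authors' actual method: the function name `masked_perturbation`, the top-k thresholding of the saliency map, and the `keep_ratio`/`epsilon` parameters are all assumptions chosen for clarity.

```python
import numpy as np

def masked_perturbation(image, perturbation, saliency_map,
                        keep_ratio=0.1, epsilon=8 / 255):
    """Restrict an adversarial perturbation to the most salient pixels.

    image:        clean image in [0, 1], shape (H, W, C)
    perturbation: raw perturbation from any black-box attack, shape (H, W, C)
    saliency_map: per-pixel importance scores, shape (H, W)
    keep_ratio:   fraction of pixels allowed to be perturbed (illustrative)
    epsilon:      L-infinity budget on the perturbation (illustrative)
    """
    h, w = saliency_map.shape
    k = max(1, int(keep_ratio * h * w))
    # Threshold at the k-th largest saliency score to get a binary mask.
    thresh = np.partition(saliency_map.ravel(), -k)[-k]
    mask = (saliency_map >= thresh).astype(image.dtype)[..., None]  # (H, W, 1)
    # Zero out the perturbation outside the salient region, clip to budget.
    delta = np.clip(perturbation, -epsilon, epsilon) * mask
    return np.clip(image + delta, 0.0, 1.0)
```

Because the mask is applied to the perturbation rather than built into a specific attack, the same step can wrap the output of many existing black-box attacks, which is how the abstract frames the compatibility claim.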
Pages: 20
Related Papers
50 records
  • [1] Imperceptible black-box waveform-level adversarial attack towards automatic speaker recognition
    Zhang, Xingyu
    Zhang, Xiongwei
    Sun, Meng
    Zou, Xia
    Chen, Kejiang
    Yu, Nenghai
    [J]. COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (01) : 65 - 79
  • [3] SIMULATOR ATTACK+ FOR BLACK-BOX ADVERSARIAL ATTACK
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [4] Towards Efficient Data Free Black-box Adversarial Attack
    Zhang, Jie
    Li, Bo
    Xu, Jianghe
    Wu, Shuang
    Ding, Shouhong
    Zhang, Lei
    Wu, Chao
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15094 - 15104
  • [5] Amora: Black-box Adversarial Morphing Attack
    Wang, Run
    Juefei-Xu, Felix
    Guo, Qing
    Huang, Yihao
    Xie, Xiaofei
    Ma, Lei
    Liu, Yang
    [J]. MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1376 - 1385
  • [6] Adversarial Eigen Attack on Black-Box Models
    Zhou, Linjun
    Cui, Peng
    Zhang, Xingxuan
    Jiang, Yinan
    Yang, Shiqiang
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15233 - 15241
  • [7] A black-Box adversarial attack for poisoning clustering
    Cina, Antonio Emanuele
    Torcinovich, Alessandro
    Pelillo, Marcello
    [J]. PATTERN RECOGNITION, 2022, 122
  • [8] IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking
    Jia, Shuai
    Song, Yibing
    Ma, Chao
    Yang, Xiaokang
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 6705 - 6714
  • [9] SCHMIDT: IMAGE AUGMENTATION FOR BLACK-BOX ADVERSARIAL ATTACK
    Shi, Yucheng
    Han, Yahong
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [10] Generalizable Black-Box Adversarial Attack With Meta Learning
    Yin, Fei
    Zhang, Yong
    Wu, Baoyuan
    Feng, Yan
    Zhang, Jingyi
    Fan, Yanbo
    Yang, Yujiu
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1804 - 1818