Black-box adversarial attacks by manipulating image attributes

Times Cited: 19
Authors
Wei, Xingxing [1 ]
Guo, Ying [1 ]
Li, Bo [1 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Adversarial attack; Adversarial attributes; Black-box setting;
DOI
10.1016/j.ins.2020.10.028
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline Code
0812 ;
Abstract
Although there exist various adversarial attack methods, most of them work by generating adversarial noise. Inspired by the fact that people usually set different camera parameters to obtain diverse visual styles when taking a picture, we propose adversarial attributes, which generate adversarial examples by manipulating image attributes such as brightness, contrast, sharpness, and chroma to simulate the imaging process. This task is accomplished under the black-box setting, where only the predicted probabilities are known. We formulate this process as an optimization problem. After efficiently solving this problem, the optimal adversarial attributes are obtained with limited queries. To guarantee the realistic appearance of adversarial examples, we bound the attribute changes using the L_p norm with different p values. Besides, we also give a formal explanation of the adversarial attributes based on the linear nature of Deep Neural Networks (DNNs). Extensive experiments are conducted on two public datasets, CIFAR-10 and ImageNet, with four representative DNNs: VGG16, AlexNet, Inception v3, and ResNet50. The results show that at most 97.79% of images in the CIFAR-10 test set and 98.01% of the ImageNet images can be successfully perturbed to at least one wrong class with only <= 300 queries per image on average. (C) 2020 Elsevier Inc. All rights reserved.
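The approach the abstract describes — searching for attribute factors (brightness, contrast, etc.) within an L_p bound, using only predicted probabilities and a limited query budget — can be sketched as a query-limited random search. The transforms, the toy stand-in classifier, and the plain random search below are illustrative assumptions, not the paper's actual model or optimizer:

```python
import random

def apply_attributes(pixels, brightness=1.0, contrast=1.0):
    """Apply brightness and contrast factors to a flat list of [0, 1] pixels."""
    bright = [p * brightness for p in pixels]             # brightness: global scaling
    mean = sum(bright) / len(bright)
    out = [mean + (p - mean) * contrast for p in bright]  # contrast: scale around the mean
    return [min(1.0, max(0.0, p)) for p in out]           # clip to the valid range

def toy_model(pixels):
    """Stand-in black-box classifier: returns the 'true'-class probability.
    Here it is just the mean intensity, so darkening lowers the score."""
    return min(1.0, max(0.0, sum(pixels) / len(pixels)))

def attribute_attack(pixels, model, eps=0.5, queries=300, seed=0):
    """Random search over attribute factors within an L_inf bound eps around
    the identity (factor 1.0), minimizing the true-class probability using
    only score queries to the model (black-box setting)."""
    rng = random.Random(seed)
    best, best_score = (1.0, 1.0), model(pixels)
    for _ in range(queries):                    # limited query budget
        b = 1.0 + rng.uniform(-eps, eps)
        c = 1.0 + rng.uniform(-eps, eps)
        score = model(apply_attributes(pixels, b, c))
        if score < best_score:
            best, best_score = (b, c), score
    return best, best_score
```

The paper bounds attribute changes under L_p norms for different p; the sketch uses the simplest case (L_inf, i.e. a per-attribute box around the identity factor) and treats the classifier's probability as the only observable signal.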
Pages: 285-296
Page Count: 12
Related Papers
50 records
  • [1] Black-box adversarial attacks by manipulating image attributes
    Wei, Xingxing
    Guo, Ying
    Li, Bo
INFORMATION SCIENCES, 2021, 550: 285-296
  • [2] A review of black-box adversarial attacks on image classification
    Zhu, Yanfei
    Zhao, Yaochi
    Hu, Zhuhua
    Luo, Tan
    He, Like
    NEUROCOMPUTING, 2024, 610
  • [3] Simple Black-box Adversarial Attacks
    Guo, Chuan
    Gardner, Jacob R.
    You, Yurong
    Wilson, Andrew Gordon
    Weinberger, Kilian Q.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [4] Black-box adversarial attacks against image quality assessment models
    Ran, Yu
    Zhang, Ao-Xiang
    Li, Mingjie
    Tang, Weixuan
    Wang, Yuan-Gen
EXPERT SYSTEMS WITH APPLICATIONS, 2025, 260
  • [5] Resiliency of SNN on Black-Box Adversarial Attacks
    Paudel, Bijay Raj
    Itani, Aashish
    Tragoudas, Spyros
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 799 - 806
  • [6] Black-box Adversarial Attacks in Autonomous Vehicle Technology
    Kumar, K. Naveen
    Vishnu, C.
    Mitra, Reshmi
    Mohan, C. Krishna
    2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR): TRUSTED COMPUTING, PRIVACY, AND SECURING MULTIMEDIA, 2020,
  • [7] Black-box Adversarial Attacks on Video Recognition Models
    Jiang, Linxi
    Ma, Xingjun
    Chen, Shaoxiang
    Bailey, James
    Jiang, Yu-Gang
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 864 - 872
  • [8] Physical Black-Box Adversarial Attacks Through Transformations
    Jiang, Wenbo
    Li, Hongwei
    Xu, Guowen
    Zhang, Tianwei
    Lu, Rongxing
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (03) : 964 - 974
  • [9] Boosting Black-Box Adversarial Attacks with Meta Learning
    Fu, Junjie
    Sun, Jian
    Wang, Gang
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7308 - 7313
  • [10] Curls & Whey: Boosting Black-Box Adversarial Attacks
    Shi, Yucheng
    Wang, Siyu
    Han, Yahong
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6512 - 6520