Robustness of Adversarial Images Against Filters

Cited by: 0
Authors
Chitic, Raluca [1 ]
Deridder, Nathan [1 ]
Leprevost, Franck [1 ]
Bernard, Nicolas [2 ]
Affiliations
[1] Univ Luxembourg, House Numbers,6 Ave Fonte, L-4364 Esch Sur Alzette, Luxembourg
[2] La Fraze, 1288 Chemin la Fraze, F-88380 Arches, France
DOI
10.1007/978-3-030-85672-4_8
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This article addresses the robustness of adversarial images against filters. Given an image A that both a convolutional neural network and a human classify as belonging to a category c_A, one considers an adversarial image D that the neural network classifies in a category c_t ≠ c_A, although a human would not notice any difference between D and A. Does applying a filter F (such as the Gaussian blur filter) to D still yield an adversarial image F(D) that fools the neural network? To address this question, we perform a study on VGG-16 trained on CIFAR-10, with adversarial images obtained by running an evolutionary algorithm on a specific image A taken from one category of CIFAR-10. Exposed to 4 individual filters, the filtered adversarial images essentially remain adversarial. We also show that combining filters may render our EA attack less effective. We therefore design a new evolutionary algorithm whose aim is to create adversarial images that pass the filter test, fool VGG-16, and remain close enough to A that a human would not notice any difference. We show that this is indeed the case by running the new algorithm on the same image A.
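The filter test described in the abstract can be sketched in a few lines: apply a filter F to a candidate adversarial image D and check whether the classifier still outputs the target class c_t. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the classifier is a hypothetical stand-in (not VGG-16), images are plain 2-D lists of floats, and the Gaussian blur is a small separable convolution with edge clamping.

```python
import math

def gaussian_kernel(radius, sigma):
    """1-D Gaussian kernel, normalised to sum to 1."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def gaussian_blur(img, radius=1, sigma=1.0):
    """Separable Gaussian blur on a 2-D grayscale image (list of rows),
    clamping coordinates at the borders."""
    k = gaussian_kernel(radius, sigma)
    h, w = len(img), len(img[0])
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    # Horizontal pass.
    tmp = [[sum(img[y][clamp(x + dx, 0, w - 1)] * k[dx + radius]
                for dx in range(-radius, radius + 1))
            for x in range(w)] for y in range(h)]
    # Vertical pass.
    return [[sum(tmp[clamp(y + dy, 0, h - 1)][x] * k[dy + radius]
                 for dy in range(-radius, radius + 1))
             for x in range(w)] for y in range(h)]

def passes_filter_test(image, classify, target_class, filters):
    """True if every filtered version of the image is still classified
    as the target class, i.e. the image remains adversarial."""
    return all(classify(f(image)) == target_class for f in filters)
```

In the paper's setting, `classify` would be VGG-16's argmax prediction and `filters` would contain the 4 individual filters (and their combinations) under study; an evolutionary algorithm would then keep only candidates for which `passes_filter_test` holds while staying visually close to A.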
Pages: 101-114
Page count: 14