FoolChecker: A platform to evaluate the robustness of images against adversarial attacks

Cited: 5
Authors
Liu Hui [1 ]
Zhao Bo [1 ]
Huang Linquan [1 ,2 ]
Guo Jiabao [1 ]
Liu Yifan [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Hubei, Peoples R China
[2] Wuhan Vocat Coll Software & Engn, Informat Sch, Wuhan 430205, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Deep neural network; Adversarial examples; Non-robust features; Differential evolution; Greedy algorithm;
DOI
10.1016/j.neucom.2020.05.062
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) are inherently vulnerable to well-designed input samples called adversarial examples, which can easily alter a DNN's output by adding slight perturbations to the input. A recent study showed that adversarial vulnerability is caused by non-robust features and is not inherently tied to DNNs. This paper presents a platform called FoolChecker that evaluates image robustness against adversarial attacks from the perspective of the image itself rather than of DNN models. We define the minimum perceptual distance between an original example and its adversarial counterpart to quantify robustness against adversarial attacks. First, differential evolution is applied to generate candidate perturbation units with high perturbation priority. Then, a greedy algorithm repeatedly adds the pixel with the current highest perturbation priority to the perturbation units until the DNN model is fooled. Finally, the perceptual distance of the perturbation units is calculated as an index of the image's robustness against adversarial attacks. Experimental results show that FoolChecker gives a proper evaluation of image robustness against adversarial attacks within acceptable time. (c) 2020 Published by Elsevier B.V.
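The greedy step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `predict` callable, the plain L2 perceptual distance, and the fixed `step` size are all assumptions made for the sake of a runnable example (in the paper, candidate units come from differential evolution and the perceptual distance follows the paper's own definition).

```python
import numpy as np

def perceptual_distance(x, x_adv):
    # L2 distance between the original and perturbed image
    # (one possible metric; the paper defines its own perceptual distance).
    return float(np.linalg.norm(x.astype(float) - x_adv.astype(float)))

def greedy_perturb(x, predict, target_label, candidates, step=1.0):
    """Greedily add candidate pixels (assumed pre-ranked, e.g. by differential
    evolution) to the perturbation set until `predict` stops returning
    `target_label`.

    x: flat image array; predict: callable image -> label;
    candidates: pixel indices sorted by descending perturbation priority.
    Returns (x_adv, distance), or (None, inf) if the model is never fooled.
    """
    x_adv = x.astype(float).copy()
    for idx in candidates:
        x_adv[idx] += step                   # perturb the next-priority pixel
        if predict(x_adv) != target_label:   # model fooled -> stop
            return x_adv, perceptual_distance(x, x_adv)
    return None, float("inf")

# Toy usage: a hypothetical "classifier" that flips its label once the
# image's pixel sum reaches 2; two unit perturbations suffice to fool it.
predict = lambda img: 0 if img.sum() < 2 else 1
x = np.zeros(4)
adv, dist = greedy_perturb(x, predict, target_label=0, candidates=[0, 1, 2, 3])
# dist is sqrt(2) ~ 1.414: the fewer pixels needed, the less robust the image.
```

A smaller `dist` means the image was fooled with less perturbation, i.e. it is less robust; this is the quantity FoolChecker reports as its robustness index.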
Pages: 216-225 (10 pages)
Related papers
50 records
  • [1] Bringing robustness against adversarial attacks
    Pereira, Gean T.
    de Carvalho, Andre C. P. L. F.
    [J]. NATURE MACHINE INTELLIGENCE, 2019, 1 (11) : 499 - 500
  • [3] On the robustness of skeleton detection against adversarial attacks
    Bai, Xiuxiu
    Yang, Ming
    Liu, Zhe
    [J]. NEURAL NETWORKS, 2020, 132 : 416 - 427
  • [4] ROBUSTNESS OF SAAK TRANSFORM AGAINST ADVERSARIAL ATTACKS
    Ramanathan, Thiyagarajan
    Manimaran, Abinaya
    You, Suya
    Kuo, C-C Jay
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2531 - 2535
  • [5] Robustness Against Adversarial Attacks Using Dimensionality
    Chattopadhyay, Nandish
    Chatterjee, Subhrojyoti
    Chattopadhyay, Anupam
    [J]. SECURITY, PRIVACY, AND APPLIED CRYPTOGRAPHY ENGINEERING, SPACE 2021, 2022, 13162 : 226 - 241
  • [6] Robustness of Adversarial Images Against Filters
    Chitic, Raluca
    Deridder, Nathan
    Leprevost, Franck
    Bernard, Nicolas
    [J]. OPTIMIZATION AND LEARNING, OLA 2021, 2021, 1443 : 101 - 114
  • [7] Training on Foveated Images Improves Robustness to Adversarial Attacks
    Shah, Muhammad A.
    Kashaf, Aqsa
    Raj, Bhiksha
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [8] ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness
    Theagarajan, Rajkumar
    Chen, Ming
    Bhanu, Bir
    Zhang, Jing
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6981 - 6989
  • [9] On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
    Roy, Deboleena
    Chakraborty, Indranil
    Ibrayev, Timur
    Roy, Kaushik
    [J]. 2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 565 - 570
  • [10] Protecting JPEG Images Against Adversarial Attacks
    Prakash, Aaditya
    Moran, Nick
    Garber, Solomon
    DiLillo, Antonella
    Storer, James
    [J]. 2018 DATA COMPRESSION CONFERENCE (DCC 2018), 2018, : 137 - 146