Towards a Certification of Deep Image Classifiers against Convolutional Attacks

Cited by: 4
Authors
Mziou-Sallami, Mallek [1 ,3 ]
Adjed, Faouzi [1 ,2 ]
Institutions
[1] IRT SystemX, Palaiseau, France
[2] Expleo Grp, Montigny Le Bretonneux, France
[3] CEA, Evry, France
Keywords
NN Robustness; Uncertainty in AI; Perception; Abstract Interpretation;
DOI
10.5220/0010870400003116
CLC classification number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning models do not yet achieve the levels of confidence, explainability, and transparency required for integration into safety-critical systems. In the context of DNN-based image classifiers, robustness was first studied under simple image attacks (2D rotation, brightness) and subsequently under other geometric perturbations. In this paper, we introduce a new method to certify deep image classifiers against convolutional attacks. Using abstract interpretation theory, we formulate lower and upper bounds with abstract intervals to support further classes of advanced attacks, including image filtering. We evaluate the proposed method on the MNIST and CIFAR10 databases and on several DNN architectures. The results show that convolutional neural networks are more robust against filtering attacks, while the robustness of multilayer perceptrons decreases as the number of neurons and hidden layers grows. These results indicate that increasing the complexity of DNN models improves prediction accuracy but often degrades robustness.
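The core step the abstract describes can be illustrated with interval arithmetic, the simplest abstract domain: a filtering attack is modeled as a convolution whose kernel coefficients are only known up to an interval, and sound lower/upper bounds on the filtered output are propagated. The sketch below is a minimal illustration under stated assumptions (a toy 3x3 patch, a hypothetical mean-filter family with per-coefficient slack `eps`), not the paper's actual certification pipeline.

```python
def interval_dot(weights, lowers, uppers):
    """Sound interval bound of sum_i w_i * x_i with each x_i in [l_i, u_i].

    For a non-negative weight the minimum uses the lower end of x_i,
    for a negative weight it uses the upper end (and vice versa for
    the maximum) -- the standard interval-arithmetic rule.
    """
    lo = sum(w * (l if w >= 0 else u) for w, l, u in zip(weights, lowers, uppers))
    hi = sum(w * (u if w >= 0 else l) for w, l, u in zip(weights, lowers, uppers))
    return lo, hi


# A convolutional (filtering) attack: every kernel coefficient may vary
# in [c - eps, c + eps]. The filtered pixel sum_i k_i * p_i then lies in
# an interval, computed by treating the (known, non-negative) pixels as
# the weights and the uncertain coefficients as the intervals.
patch = [0.1, 0.5, 0.2, 0.9, 1.0, 0.8, 0.0, 0.4, 0.3]  # pixel values in [0, 1]
kernel = [1.0 / 9.0] * 9                                # nominal 3x3 mean filter
eps = 0.02                                              # hypothetical kernel slack
k_lo = [c - eps for c in kernel]
k_hi = [c + eps for c in kernel]

lo, hi = interval_dot(patch, k_lo, k_hi)
print(lo, hi)  # bounds covering every filter in the attack family
```

Propagating such intervals layer by layer through the network and checking that the lower bound of the correct class score stays above the upper bounds of the other classes yields a certificate over the whole attack family at once, rather than testing individual filtered images.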
Pages: 419-428
Page count: 10
Related papers
50 total
  • [1] Integrative System of Deep Classifiers Certification: Case of Convolutional Attacks
    Smati, Imen
    Khalsi, Rania
    Mziou-Sallami, Mallek
    Adjed, Faouzi
    Ghorbel, Faouzi
    AGENTS AND ARTIFICIAL INTELLIGENCE, ICAART 2022, 2022, 13786 : 99 - 121
  • [2] Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems
    Anastasiou, Theodora
    Karagiorgou, Sophia
    Petrou, Petros
    Papamartzivanos, Dimitrios
    Giannetsos, Thanassis
    Tsirigotaki, Georgia
    Keizer, Jelle
    SENSORS, 2022, 22 (18)
  • [3] A Watermarking-Based Framework for Protecting Deep Image Classifiers Against Adversarial Attacks
    Sun, Chen
    Yang, En-Hui
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3324 - 3333
  • [4] The Role of Class Information in Model Inversion Attacks Against Image Deep Learning Classifiers
    Tian, Zhiyi
    Cui, Lei
    Zhang, Chenhan
    Tan, Shuaishuai
    Yu, Shui
    Tian, Yonghong
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2407 - 2420
  • [5] A NOVEL SYSTEM FOR DEEP CONTOUR CLASSIFIERS CERTIFICATION UNDER FILTERING ATTACKS
    Khalsi, Rania
    Smati, Imen
    Sallami, Mallek Mziou
    Ghorbel, Faouzi
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3561 - 3565
  • [6] Black-box Evolutionary Search for Adversarial Examples against Deep Image Classifiers in Non-Targeted Attacks
    Prochazka, Stepan
    Neruda, Roman
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [7] Deep Image Destruction: Vulnerability of Deep Image-to-Image Models against Adversarial Attacks
    Choi, Jun-Ho
    Zhang, Huan
    Kim, Jun-Hyuk
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1287 - 1293
  • [8] SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers
    Huang, Bingyao
    Ling, Haibin
    2022 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR 2022), 2022, : 534 - 542
  • [9] Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers
    Kim, Brian
    Sagduyu, Yalin E.
    Erpek, Tugba
    Davaslioglu, Kemal
    Ulukus, Sennur
    2020 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2020,
  • [10] EFFICIENT RANDOMIZED DEFENSE AGAINST ADVERSARIAL ATTACKS IN DEEP CONVOLUTIONAL NEURAL NETWORKS
    Sheikholeslami, Fatemeh
    Jain, Swayambhoo
    Giannakis, Georgios B.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3277 - 3281