Towards a Certification of Deep Image Classifiers against Convolutional Attacks

Cited by: 4
Authors
Mziou-Sallami, Mallek [1 ,3 ]
Adjed, Faouzi [1 ,2 ]
Affiliations
[1] IRT SystemX, Palaiseau, France
[2] Expleo Grp, Montigny Le Bretonneux, France
[3] CEA, Evry, France
Keywords
NN Robustness; Uncertainty in AI; Perception; Abstract Interpretation;
DOI
10.5220/0010870400003116
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning models do not achieve sufficient confidence, explainability, and transparency levels to be integrated into safety-critical systems. In the context of DNN-based image classifiers, robustness was first studied under simple image attacks (2D rotation, brightness) and subsequently under other geometrical perturbations. In this paper, we introduce a new method to certify deep image classifiers against convolutional attacks. Using abstract interpretation theory, we formulate lower and upper bounds with abstract intervals to support further classes of advanced attacks, including image filtering. We evaluate the proposed method on the MNIST and CIFAR10 databases and on several DNN architectures. The results show that convolutional neural networks are more robust against filtering attacks, whereas multilayer perceptron robustness decreases as the number of neurons and hidden layers grows. These results indicate that increasing the complexity of DNN models improves prediction accuracy but often degrades robustness.
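The core idea described in the abstract, propagating interval lower and upper bounds through a convolutional (filtering) perturbation, can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hedged example assuming a single-channel image, valid padding, stride 1, and a hypothetical helper name `interval_conv2d`:

```python
import numpy as np

def interval_conv2d(lower, upper, kernel):
    """Propagate an abstract interval [lower, upper] over a single-channel
    image through a 2D convolution (valid padding, stride 1).

    The output minimum pairs positive kernel weights with the input lower
    bound and negative weights with the upper bound; the maximum does the
    opposite. This yields sound element-wise bounds on the filtered image.
    """
    kh, kw = kernel.shape
    h, w = lower.shape
    oh, ow = h - kh + 1, w - kw + 1
    k_pos = np.maximum(kernel, 0.0)   # positive part of the filter
    k_neg = np.minimum(kernel, 0.0)   # negative part of the filter
    out_lo = np.empty((oh, ow))
    out_hi = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            lo_patch = lower[i:i + kh, j:j + kw]
            hi_patch = upper[i:i + kh, j:j + kw]
            out_lo[i, j] = (k_pos * lo_patch + k_neg * hi_patch).sum()
            out_hi[i, j] = (k_pos * hi_patch + k_neg * lo_patch).sum()
    return out_lo, out_hi
```

For a pixel-wise perturbation of radius eps, each output interval has width `2 * eps * sum(|kernel|)`; certification then amounts to checking that the classifier's decision is stable for every point inside the propagated bounds.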
Pages: 419-428
Page count: 10
Related papers (50 total)
  • [21] Enhancing the Sustainability of Deep-Learning-Based Network Intrusion Detection Classifiers against Adversarial Attacks
    Alotaibi, Afnan
    Rassam, Murad A.
    SUSTAINABILITY, 2023, 15 (12)
  • [22] Defending Against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit Plane Slicing
    Liu, Yuan
    Zhou, Pingqiang
    PROCEEDINGS OF THE 2020 ASIAN HARDWARE ORIENTED SECURITY AND TRUST SYMPOSIUM (ASIANHOST), 2020,
  • [23] Towards Lightweight Black-Box Attacks Against Deep Neural Networks
    Sun, Chenghao
    Zhang, Yonggang
    Wan, Chaoqun
    Wang, Qizhou
    Li, Ya
    Liu, Tongliang
    Han, Bo
    Tian, Xinmei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [24] Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger
    Yu, Yi
    Wang, Yufei
    Yang, Wenhan
    Lu, Shijian
    Tan, Yap-Peng
    Kot, Alex C.
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 12250 - 12259
  • [25] Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks
    Choi, Jun-Ho
    Zhang, Huan
    Kim, Jun-Hyuk
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 303 - 311
  • [26] Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers
    Joshi, Ameya
    Mukherjee, Amitangshu
    Sarkar, Soumik
    Hegde, Chinmay
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4772 - 4782
  • [27] Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
    Truong, Loc
    Jones, Chace
    Hutchinson, Brian
    August, Andrew
    Praggastis, Brenda
    Jasper, Robert
    Nichols, Nicole
    Tuor, Aaron
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3422 - 3431
  • [28] Assessing Adaptive Attacks Against Trained JavaScript Classifiers
    Hansen, Niels
    De Carli, Lorenzo
    Davidson, Drew
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS (SECURECOMM 2020), PT I, 2020, 335 : 190 - 210
  • [29] Defending against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit-plane Slicing
    Liu, Yuan
    Dong, Jinxin
    Zhou, Pingqiang
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2022, 18 (03)
  • [30] Image classifiers and image deep learning classifiers evolved in detection of Oryza sativa diseases: survey
    Goluguri, N. V. Raja Reddy
    Suganya Devi, K.
    Vadaparthi, Nagesh
    ARTIFICIAL INTELLIGENCE REVIEW, 2021, 54 (01) : 359 - 396