Towards a Certification of Deep Image Classifiers against Convolutional Attacks

Cited by: 4
|
Authors
Mziou-Sallami, Mallek [1 ,3 ]
Adjed, Faouzi [1 ,2 ]
Affiliations
[1] IRT SystemX, Palaiseau, France
[2] Expleo Grp, Montigny Le Bretonneux, France
[3] CEA, Evry, France
Keywords
NN Robustness; Uncertainty in AI; Perception; Abstract Interpretation;
DOI
10.5220/0010870400003116
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning models do not yet achieve the confidence, explainability, and transparency levels required for integration into safety-critical systems. In the context of DNN-based image classifiers, robustness was first studied under simple image attacks (2D rotation, brightness) and subsequently under other geometric perturbations. In this paper, we introduce a new method to certify deep image classifiers against convolutional attacks. Using abstract interpretation theory, we formulate lower and upper bounds as abstract intervals to support further classes of advanced attacks, including image filtering. We evaluate the proposed method on the MNIST and CIFAR10 databases and on several DNN architectures. The results show that convolutional neural networks are more robust against filtering attacks, while multilayer perceptron robustness decreases as the number of neurons and hidden layers grows. These results indicate that increasing the complexity of DNN models improves prediction accuracy but often harms robustness.
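The certification idea described in the abstract can be sketched with interval arithmetic: each pixel carries a lower and an upper bound, and a convolutional (filtering) attack is over-approximated by propagating those bounds through the kernel. The snippet below is a minimal illustrative sketch, not the authors' implementation (which builds on a full abstract interpretation framework); the function name `interval_conv2d`, the 3x3 blur kernel, and the perturbation radius `eps` are assumptions made for this example.

```python
import numpy as np

def interval_conv2d(lower, upper, kernel):
    """Propagate per-pixel intensity intervals [lower, upper] through a
    2D convolution (valid padding) using interval arithmetic."""
    kh, kw = kernel.shape
    h, w = lower.shape
    oh, ow = h - kh + 1, w - kw + 1
    # Split the kernel into its positive and negative parts: a positive
    # weight preserves the ordering of bounds, a negative weight swaps them.
    w_pos = np.maximum(kernel, 0.0)
    w_neg = np.minimum(kernel, 0.0)
    out_lo = np.zeros((oh, ow))
    out_hi = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            lo_patch = lower[i:i + kh, j:j + kw]
            hi_patch = upper[i:i + kh, j:j + kw]
            # Sound lower/upper bounds on each filtered pixel
            out_lo[i, j] = np.sum(lo_patch * w_pos + hi_patch * w_neg)
            out_hi[i, j] = np.sum(hi_patch * w_pos + lo_patch * w_neg)
    return out_lo, out_hi

# Example: bound the effect of a 3x3 mean-blur filter on a 4x4 image
# whose pixels are each perturbed by at most eps.
img = np.arange(16, dtype=float).reshape(4, 4) / 16.0
eps = 0.05
blur = np.full((3, 3), 1.0 / 9.0)
lo, hi = interval_conv2d(img - eps, img + eps, blur)
assert np.all(lo <= hi)
```

Splitting the kernel into positive and negative parts is the standard trick for keeping the abstract bounds sound; the resulting output intervals can then be pushed through the subsequent network layers to check whether the classification is stable under the whole family of filtered inputs.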
Pages: 419-428
Number of pages: 10