Relative Robustness of Quantized Neural Networks Against Adversarial Attacks

Cited by: 7
Authors
Duncan, Kirsty [1 ]
Komendantskaya, Ekaterina [1 ]
Stewart, Robert [1 ]
Lones, Michael [1 ]
Affiliations
[1] Heriot Watt Univ, Dept Comp Sci, Edinburgh, Midlothian, Scotland
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
Keywords
neural network; verification; adversarial attack;
DOI
10.1109/ijcnn48605.2020.9207596
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
Neural networks are increasingly deployed on edge computing devices and smart sensors to reduce latency and save bandwidth. Neural network compression, such as quantization, is necessary to fit trained neural networks onto these resource-constrained devices. At the same time, their use in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations can be used as an attack mechanism against neural networks, leading to "obviously wrong" misclassifications. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks are generally robust relative to their full-precision counterparts (98.6%-99.7%), and that the transfer of adversarial attacks decreases to as low as 52.05% as the subtlety of the perturbation increases. These results show that quantization introduces resilience against the transfer of adversarial attacks while causing negligible loss of robustness.
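To make the two quantities in the abstract concrete, the sketch below pairs a weight-quantization step with a transfer-rate check: of the adversarial inputs that fool a full-precision model, what fraction still fool its quantized copy. This is a minimal illustrative NumPy sketch, not the paper's method (the paper uses SMT-based verification on trained networks); the names `quantize`, `TinyMLP`, and `transfer_rate`, the uniform symmetric quantization scheme, the bit-width, and the random placeholder weights and inputs are all assumptions made for illustration.

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric quantization of an array to `bits` bits;
    a simplified stand-in for reduced-precision (fixed-point) weights."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w.copy()  # all-zero parameter, nothing to quantize
    return np.round(w / scale) * scale

class TinyMLP:
    """Minimal two-layer ReLU classifier; weights are placeholders."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def predict(self, x):
        hidden = np.maximum(0.0, x @ self.w1 + self.b1)  # ReLU layer
        return np.argmax(hidden @ self.w2 + self.b2, axis=-1)

    def quantized(self, bits):
        # Quantize every parameter to produce the reduced-precision copy.
        return TinyMLP(*(quantize(p, bits)
                         for p in (self.w1, self.b1, self.w2, self.b2)))

def transfer_rate(full, quant, x_adv, y_true):
    """Of the adversarial inputs that fool the full-precision model,
    return the fraction that also fool its quantized counterpart."""
    fooled_full = full.predict(x_adv) != y_true
    fooled_quant = quant.predict(x_adv) != y_true
    if not fooled_full.any():
        return 0.0
    return float((fooled_full & fooled_quant).sum()) / float(fooled_full.sum())

# Usage with random placeholder weights and stand-in "adversarial" inputs:
rng = np.random.default_rng(0)
net = TinyMLP(rng.normal(size=(16, 32)), np.zeros(32),
              rng.normal(size=(32, 10)), np.zeros(10))
qnet = net.quantized(bits=4)
x_adv = rng.normal(size=(100, 16))
y_true = rng.integers(0, 10, size=100)
print(f"attack transfer rate: {transfer_rate(net, qnet, x_adv, y_true):.2%}")
```

With real adversarial examples in place of the random stand-ins, sweeping `bits` downward would reproduce the kind of transfer-versus-precision measurement the abstract summarizes.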
Pages: 8
Related Papers
50 in total
  • [1] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [2] Robustness Against Adversarial Attacks in Neural Networks Using Incremental Dissipativity
    Aquino, Bernardo
    Rahnama, Arash
    Seiler, Peter
    Lin, Lizhen
    Gupta, Vijay
    [J]. IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 2341 - 2346
  • [3] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [4] On the Robustness of Bayesian Neural Networks to Adversarial Attacks
    Bortolussi, Luca
    Carbone, Ginevra
    Laurenti, Luca
    Patane, Andrea
    Sanguinetti, Guido
    Wicker, Matthew
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [5] ROBUSTNESS-AWARE FILTER PRUNING FOR ROBUST NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS
    Lim, Hyuntak
    Roh, Si-Dong
    Park, Sangki
    Chung, Ki-Seok
    [J]. 2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,
  • [6] Bringing robustness against adversarial attacks
    Pereira, Gean T.
    de Carvalho, Andre C. P. L. F.
    [J]. NATURE MACHINE INTELLIGENCE, 2019, 1 (11) : 499 - 500
  • [7] On the Robustness of Neural-Enhanced Video Streaming against Adversarial Attacks
    Zhou, Qihua
    Guo, Jingcai
    Guo, Song
    Li, Ruibin
    Zhang, Jie
    Wang, Bingjie
    Xu, Zhenda
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024 : 17123 - 17131
  • [8] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [9] Chaotic neural network quantization and its robustness against adversarial attacks
    Osama, Alaa
    Gadallah, Samar I.
    Said, Lobna A.
    Radwan, Ahmed G.
    Fouda, Mohammed E.
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 286