Relative Robustness of Quantized Neural Networks Against Adversarial Attacks

Cited by: 7
Authors
Duncan, Kirsty [1 ]
Komendantskaya, Ekaterina [1 ]
Stewart, Robert [1 ]
Lones, Michael [1 ]
Affiliations
[1] Heriot Watt Univ, Dept Comp Sci, Edinburgh, Midlothian, Scotland
Funding
Engineering and Physical Sciences Research Council (EPSRC);
Keywords
neural network; verification; adversarial attack;
DOI
10.1109/ijcnn48605.2020.9207596
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks are increasingly being moved to edge computing devices and smart sensors to reduce latency and save bandwidth. Neural network compression, such as quantization, is necessary to fit trained neural networks onto these resource-constrained devices. At the same time, their use in safety-critical applications raises the need to verify properties of neural networks. Adversarial perturbations have the potential to be used as an attack mechanism on neural networks, leading to "obviously wrong" misclassifications. SMT solvers have been proposed to formally prove robustness guarantees against such adversarial perturbations. We investigate how well these robustness guarantees are preserved when the precision of a neural network is quantized. We also evaluate how effectively adversarial attacks transfer to quantized neural networks. Our results show that quantized neural networks are generally robust relative to their full-precision counterparts (98.6%-99.7%), and that the transfer of adversarial attacks decreases to as low as 52.05% as the subtlety of the perturbation increases. These results show that quantization introduces resilience against the transfer of adversarial attacks while causing negligible loss of robustness.
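The abstract's core measurement, whether an adversarial example crafted against a full-precision model still fools its quantized counterpart, can be sketched in miniature. This is a hypothetical illustration, not the paper's code: the tiny linear "network", the FGSM-style perturbation, and the uniform 8-bit weight quantization are all assumptions made for the sketch.

```python
# Illustrative sketch (not the paper's implementation): craft an
# FGSM-style adversarial input against a full-precision linear model,
# quantize the weights, and check whether the attack transfers.
import numpy as np

rng = np.random.default_rng(0)

# Full-precision "network": a linear classifier with 2 classes, 4 features.
W = rng.normal(size=(2, 4))

def predict(W, x):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ x))

def quantize(W, bits=8):
    """Uniform symmetric quantization of weights to the given bit width."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def fgsm(W, x, label, eps):
    """FGSM-style step: move x along the sign of the gradient of
    (score_other - score_label), pushing it toward the other class."""
    other = 1 - label
    grad = W[other] - W[label]
    return x + eps * np.sign(grad)

x = rng.normal(size=4)
y = predict(W, x)

x_adv = fgsm(W, x, y, eps=0.5)   # attack crafted on the FP model
Wq = quantize(W, bits=8)         # 8-bit quantized counterpart

print("full-precision model fooled:", predict(W, x_adv) != y)
print("attack transfers to quantized model:", predict(Wq, x_adv) != y)
```

Repeating this over many inputs and perturbation budgets `eps` gives a transfer rate; the paper's observation corresponds to that rate dropping as the perturbation becomes more subtle, while the quantized model's clean-input behaviour stays close to the full-precision one.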
Pages: 8
Related Papers
(50 records)
  • [31] Comparison of the Resilience of Convolutional and Cellular Neural Networks Against Adversarial Attacks
    Horvath, Andras. 2022 IEEE International Symposium on Circuits and Systems (ISCAS 22), 2022: 2348-2352.
  • [32] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks
    Liu, Jia; Jin, Yaochu. 2019 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2019), 2019: 1778-1785.
  • [33] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
    Siddique, Ayesha; Hoque, Khaza Anuarul. Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE 2022), 2022: 364-369.
  • [34] Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks
    Wang, Haibo; Zhou, Chuan; Chen, Xin; Wu, Jia; Pan, Shirui; Li, Zhao; Wang, Jilong; Yu, Philip S. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(11): 6344-6357.
  • [35] HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks
    Kumarasinghe, Udesh; Nabeel, Mohamed; De Zoysa, Kasun; Gunawardana, Kasun; Elvitigala, Charitha. 2022 IEEE International Conference on Data Mining Workshops (ICDMW), 2022: 698-705.
  • [36] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization
    Zhou, Yan; Kantarcioglu, Murat; Xi, Bowei. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 2020, 11413.
  • [37] Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
    Zoppi, Tommaso; Ceccarelli, Andrea. IEEE Access, 2021, 9: 150579-150591.
  • [38] Robust convolutional neural networks against adversarial attacks on medical images
    Shi, Xiaoshuang; Peng, Yifan; Chen, Qingyu; Keenan, Tiarnan; Thavikulwat, Alisa T.; Lee, Sungwon; Tang, Yuxing; Chew, Emily Y.; Summers, Ronald M.; Lu, Zhiyong. Pattern Recognition, 2022, 132.
  • [39] Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
    Luo, Bo; Liu, Yannan; Wei, Lingxiao; Xu, Qiang. Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018: 1652-1659.
  • [40] Centered-Ranking Learning Against Adversarial Attacks in Neural Networks
    Appiah, Benjamin; Adu, Adolph S. Y.; Osei, Isaac; Assamah, Gabriel; Hammond, Ebenezer N. A. International Journal of Network Security, 2023, 25(05): 814-820.