Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring

Times Cited: 0
Authors
Zoppi, Tommaso [1 ]
Ceccarelli, Andrea [1 ]
Affiliations
[1] Univ Firenze, Dipartimento Matemat & Informat, I-50134 Florence, Italy
Source
IEEE ACCESS | 2021, Vol. 9
Keywords
Graphics processing units; Data models; Monitoring; Detectors; Software; Runtime; Neurons; Attack detection; anomaly detection; graphics processing unit; deep neural networks; adversarial attacks; image classification; MACHINE
DOI
10.1109/ACCESS.2021.3125920
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep Neural Networks (DNNs) are the preferred choice for image-based machine learning applications in several domains. However, DNNs are vulnerable to adversarial attacks: carefully crafted perturbations applied to input images to fool a DNN model. Adversarial attacks may prevent the application of DNNs in security-critical tasks; consequently, considerable research effort has been devoted to securing DNNs. Typical approaches either increase model robustness, add detection capabilities to the model, or operate on the input data. Instead, in this paper we propose to detect ongoing attacks by monitoring performance indicators of the underlying Graphics Processing Unit (GPU). Adversarial attacks generate images that activate the neurons of a DNN differently than legitimate images do. This in turn alters GPU activity, which can be observed through software monitors and anomaly detectors. This paper presents our monitoring and detection system, together with an extensive experimental analysis spanning 14 adversarial attacks, 3 datasets, and 12 models. Results show that, despite the limited monitoring resolution, adversarial attacks can be detected in most cases, with peak detection accuracy above 90%.
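The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: it samples GPU performance indicators through NVML (via the pynvml bindings) while a DNN classifies images, trains an anomaly detector on traces recorded under legitimate inputs, and flags live traces that deviate. The choice of counters (utilization, memory, power) and the IsolationForest detector are illustrative assumptions; the paper's actual indicators and detection algorithm may differ.

```python
# Illustrative sketch, not the paper's implementation: detect adversarial
# inputs by watching GPU performance indicators for anomalous profiles.
import time
import numpy as np
import pynvml
from sklearn.ensemble import IsolationForest

def sample_gpu_indicators(handle):
    """Read one vector of GPU performance indicators via NVML."""
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # % of time GPU/memory busy
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # memory usage in bytes
    power = pynvml.nvmlDeviceGetPowerUsage(handle)       # power draw in milliwatts
    return [util.gpu, util.memory, mem.used / mem.total, power]

def collect_trace(handle, n_samples, period_s=0.01):
    """Poll indicators at a fixed period; NVML bounds the monitoring resolution."""
    trace = []
    for _ in range(n_samples):
        trace.append(sample_gpu_indicators(handle))
        time.sleep(period_s)
    return np.array(trace)

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# 1) Train the detector on traces recorded while the DNN classifies
#    legitimate images (the model runs in a separate process meanwhile).
baseline = collect_trace(gpu, n_samples=500)
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# 2) At runtime, score fresh samples: -1 marks an anomalous GPU profile,
#    a possible symptom of an ongoing adversarial attack.
live = collect_trace(gpu, n_samples=50)
alarms = detector.predict(live) == -1
print(f"suspicious samples: {alarms.sum()}/{len(alarms)}")

pynvml.nvmlShutdown()
```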
Pages: 150579-150591
Page count: 13
Related Papers (50 in total)
  • [1] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [2] A survey on the vulnerability of deep neural networks against adversarial attacks
    Michel, Andy
    Jha, Sumit Kumar
    Ewetz, Rickard
    [J]. PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (02) : 131 - 141
  • [3] Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
    Ozdag, Mesut
    [J]. CYBER PHYSICAL SYSTEMS AND DEEP LEARNING, 2018, 140 : 152 - 161
  • [4] Evolving Hyperparameters for Training Deep Neural Networks against Adversarial Attacks
    Liu, Jia
    Jin, Yaochu
    [J]. 2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1778 - 1785
  • [5] Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
    Siddique, Ayesha
    Hoque, Khaza Anuarul
    [J]. PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 364 - 369
  • [6] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [7] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization
    Zhou, Yan
    Kantarcioglu, Murat
    Xi, Bowei
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [8] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [9] EFFICIENT RANDOMIZED DEFENSE AGAINST ADVERSARIAL ATTACKS IN DEEP CONVOLUTIONAL NEURAL NETWORKS
    Sheikholeslami, Fatemeh
    Jain, Swayambhoo
    Giannakis, Georgios B.
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3277 - 3281