Voltage Noise-Based Adversarial Attacks on Machine Learning Inference in Multi-Tenant FPGA Accelerators

Cited by: 0
Authors
Majumdar, Saikat [1 ]
Teodorescu, Radu [1 ]
Affiliations
[1] Ohio State Univ, Dept Comp Sci & Engn, Columbus, OH 43210 USA
Funding
National Science Foundation (USA)
DOI
10.1109/HOST55342.2024.10545401
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural network (DNN) classifiers are known to be vulnerable to adversarial attacks, in which a model is induced to misclassify an input into the wrong class. These attacks affect virtually all state-of-the-art DNN models. While most adversarial attacks work by altering the classifier input, recent variants have also targeted the model parameters. This paper focuses on a new attack vector on DNN models that leverages computation errors, rather than memory errors, deliberately introduced during DNN inference to induce misclassification. In particular, it examines errors introduced by voltage noise into FPGA-based accelerators as the attack mechanism. Advancing beyond prior work, the paper demonstrates that targeted attacks are possible even when the underlying faults occur randomly. It presents an approach for precisely characterizing the distribution of faults that voltage noise induces in an individual device, by examining classification errors on select inputs. It then shows how, by fine-tuning the parameters of the attack (noise levels and target DNN layers), the attacker can steer the model toward a desired misclassification class without altering the original input. We demonstrate the attack on an FPGA device and show that the attack success rate ranges between 80% and 99.5%, depending on the DNN model and dataset.
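To make the attack flow described in the abstract concrete, below is a minimal, hypothetical Python/PyTorch simulation sketch. It stands in for FPGA voltage noise by randomly corrupting one layer's activations during inference, then grid-searches the attack parameters (a fault rate and magnitude as a proxy for noise level, plus the target layer) for the configuration that most often yields an attacker-chosen class. The toy model, fault model, parameter grid, and all names (make_fault_hook, attack_success_rate) are illustrative assumptions, not the authors' FPGA methodology.

# Hypothetical simulation of the attack idea from the abstract: inject random
# computation faults into one DNN layer's activations (standing in for voltage
# noise on an FPGA accelerator) and search over (noise level, target layer)
# for the setting that most reliably steers the classifier to a target class.
# Everything below is an illustrative assumption, not the authors' actual code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a DNN classifier (assumption: any trained model would do).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

def make_fault_hook(fault_rate: float, magnitude: float):
    """Return a forward hook that corrupts a random subset of activations,
    mimicking randomly occurring voltage-noise-induced computation faults."""
    def hook(module, inputs, output):
        mask = torch.rand_like(output) < fault_rate   # which values fault
        noise = magnitude * torch.randn_like(output)  # fault magnitude
        return torch.where(mask, output + noise, output)
    return hook

@torch.no_grad()
def attack_success_rate(x, target_class, layer, fault_rate, magnitude, trials=50):
    """Fraction of faulty inferences classified as target_class. Note the
    input x itself is never modified, matching the abstract's threat model."""
    handle = layer.register_forward_hook(make_fault_hook(fault_rate, magnitude))
    hits = 0
    for _ in range(trials):
        hits += int(model(x).argmax(dim=1).item() == target_class)
    handle.remove()
    return hits / trials

# "Fine-tuning the parameters of the attack": grid-search noise level and layer.
x = torch.randn(1, 1, 28, 28)  # placeholder input (assumption)
target = 3                     # attacker-chosen misclassification class
layers = [m for m in model if isinstance(m, nn.Linear)]

best = max(
    ((attack_success_rate(x, target, layer, rate, mag), i, rate, mag)
     for i, layer in enumerate(layers)
     for rate in (0.01, 0.05, 0.1)
     for mag in (0.5, 1.0, 2.0)),
    key=lambda t: t[0],
)
print(f"best success={best[0]:.2f} at layer {best[1]}, "
      f"fault_rate={best[2]}, magnitude={best[3]}")

On real hardware, the characterization step would replace this software fault model: the attacker would profile which computations actually fault on a given device under each voltage-noise level, using classification errors on select inputs as the observable signal.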
Pages: 80 - 85
Page count: 6
Related Papers (showing 21 - 30 of 50)
  • [21] Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning
    Standen, Maxwell
    Kim, Junae
    Szabo, Claudia
    ACM COMPUTING SURVEYS, 2025, 57 (05)
  • [22] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529
  • [23] FPGA-based Deep Learning Inference Accelerators: Where Are We Standing?
    Nechi, Anouar
    Groth, Lukas
    Mulhem, Saleh
    Merchant, Farhad
    Buchty, Rainer
    Berekovic, Mladen
    ACM TRANSACTIONS ON RECONFIGURABLE TECHNOLOGY AND SYSTEMS, 2023, 16 (04)
  • [24] Development of a Novel Cloud-based Multi-tenant Model Creation Scheme for Machine Tools
    Lin, Yu-Chuan
    Hung, Min-Hsiung
    Wei, Chun-Fan
    Cheng, Fan-Tien
    2015 INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2015, : 1448 - 1449
  • [25] Octopus: SLO-Aware Progressive Inference Serving via Deep Reinforcement Learning in Multi-tenant Edge Cluster
    Zhang, Ziyang
    Zhao, Yang
    Liu, Jie
    SERVICE-ORIENTED COMPUTING, ICSOC 2023, PT II, 2023, 14420 : 242 - 258
  • [26] Outsourcing Secured Machine Learning (ML)-as-a-Service for Causal Impact Analytics in Multi-Tenant Public Cloud
    Hu, Yuh-Jong
    2017 2ND INTERNATIONAL CONFERENCE ON TELECOMMUNICATION AND NETWORKS (TEL-NET), 2017, : 15 - 15
  • [27] Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks
    Panda, Priyadarshini
    Chakraborty, Indranil
    Roy, Kaushik
    IEEE ACCESS, 2019, 7 : 70157 - 70168
  • [28] Addressing Adversarial Attacks Against Security Systems Based on Machine Learning
    Apruzzese, Giovanni
    Colajanni, Michele
    Ferretti, Luca
    Marchetti, Mirco
    2019 11TH INTERNATIONAL CONFERENCE ON CYBER CONFLICT (CYCON): SILENT BATTLE, 2019, : 383 - 400
  • [29] Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
    Newaz, A. K. M. Iqtidar
    Haque, Nur Imtiazul
    Sikder, Amit Kumar
    Rahman, Mohammad Ashiqur
    Uluagac, A. Selcuk
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020
  • [30] Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
    Haroon, Muhammad Shahzad
    Ali, Husnain Mansoor
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (02): : 3513 - 3527