Using Undervolting as an on-Device Defense Against Adversarial Machine Learning Attacks

Cited by: 4
Authors
Majumdar, Saikat [1 ]
Samavatian, Mohammad Hossein [1 ]
Barber, Kristin [1 ]
Teodorescu, Radu [1 ]
Affiliation
[1] Ohio State Univ, Dept Comp Sci & Engn, Columbus, OH 43210 USA
Funding
US National Science Foundation
Keywords
undervolting; machine learning; defense;
DOI
10.1109/HOST49136.2021.9702287
Chinese Library Classification (CLC)
TP3 [computing technology; computer technology]
Discipline code
0812
Abstract
Deep neural network (DNN) classifiers are powerful tools that drive a broad spectrum of important applications, from image recognition to autonomous vehicles. Unfortunately, DNNs are vulnerable to adversarial attacks that affect virtually all state-of-the-art models. These attacks make small, imperceptible modifications to inputs that are sufficient to induce the DNN to produce the wrong classification. In this paper, we propose a novel, lightweight adversarial correction and/or detection mechanism for image classifiers that relies on undervolting (running a chip at a voltage slightly below its safe margin). We propose using controlled undervolting of the chip running the inference process to introduce a limited number of compute errors. We show that these errors disrupt the adversarial input in a way that can be used either to correct the classification or to detect the input as adversarial. We evaluate the proposed solution in an FPGA design and through software simulation. We evaluate 10 attacks and show average detection rates of 77% and 90% on two popular DNNs.
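A minimal software-simulation sketch of the idea described in the abstract, not the authors' FPGA implementation: undervolting-induced faults are modeled here as randomly zeroed activation values injected through PyTorch forward hooks, and an input is flagged as adversarial when the faulty predictions disagree with the fault-free prediction often enough. The toy model, error rate, number of runs, and disagreement threshold are illustrative assumptions.

```python
# Hedged sketch: simulate undervolting-style compute errors in software and
# use prediction disagreement as an adversarial-input detector.
import torch
import torch.nn as nn


class FaultInjector:
    """Randomly zeroes a small fraction of activation values to mimic
    undervolting-induced errors (a simplification of real hardware faults)."""

    def __init__(self, error_rate: float):
        self.error_rate = error_rate

    def __call__(self, module, inputs, output):
        # Forward hooks may return a modified output that replaces the original.
        mask = torch.rand_like(output) < self.error_rate
        return output.masked_fill(mask, 0.0)


def detect_adversarial(model: nn.Module, x: torch.Tensor,
                       error_rate: float = 1e-3, runs: int = 8,
                       disagree_threshold: float = 0.5) -> bool:
    """Return True if faulty inference disagrees with clean inference often
    enough to suggest the input is adversarial (illustrative heuristic)."""
    model.eval()
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1)

        # Attach fault injection to all ReLU activations.
        injector = FaultInjector(error_rate)
        handles = [m.register_forward_hook(injector)
                   for m in model.modules() if isinstance(m, nn.ReLU)]
        disagreements = 0
        for _ in range(runs):
            noisy_pred = model(x).argmax(dim=1)
            disagreements += int((noisy_pred != clean_pred).item())
        for h in handles:
            h.remove()

    return disagreements / runs >= disagree_threshold


if __name__ == "__main__":
    # Toy CNN stands in for the evaluated DNNs; in practice this would be the
    # deployed classifier running on the undervolted accelerator.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    x = torch.randn(1, 3, 32, 32)
    print("flagged as adversarial:", detect_adversarial(model, x))
```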
Pages: 158 - 169
Number of pages: 12
相关论文
共 50 条
  • [31] Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder
    V. V. Platonov
    N. M. Grigorjeva
    [J]. Automatic Control and Computer Sciences, 2023, 57 : 989 - 995
  • [32] A Defense Method against Poisoning Attacks on IoT Machine Learning Using Poisonous Data
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    [J]. 2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 100 - 107
  • [33] Instance-based defense against adversarial attacks in Deep Reinforcement Learning
    Garcia, Javier
    Sagredo, Ismael
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 107
  • [34] ASCL: Adversarial supervised contrastive learning for defense against word substitution attacks
    Shi, Jiahui
    Li, Linjing
    Zeng, Daniel
    [J]. NEUROCOMPUTING, 2022, 510 : 59 - 68
  • [35] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    [J]. 2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341
  • [36] Addressing Adversarial Attacks Against Security Systems Based on Machine Learning
    Apruzzese, Giovanni
    Colajanni, Michele
    Ferretti, Luca
    Marchetti, Mirco
    [J]. 2019 11TH INTERNATIONAL CONFERENCE ON CYBER CONFLICT (CYCON): SILENT BATTLE, 2019, : 383 - 400
  • [37] Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks
    Gurel, Nezihe Merve
    Qi, Xiangyu
    Rimanic, Luka
    Zhang, Ce
    Li, Bo
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [38] Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks
    Panda, Priyadarshini
    Chakraborty, Indranil
    Roy, Kaushik
    [J]. IEEE ACCESS, 2019, 7 : 70157 - 70168
  • [39] An Adversarial Machine Learning Model Against Android Malware Evasion Attacks
    Chen, Lingwei
    Hou, Shifu
    Ye, Yanfang
    Chen, Lifei
    [J]. WEB AND BIG DATA, 2017, 10612 : 43 - 55
  • [40] Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems
    Mumcu, Furkan
    Doshi, Keval
    Yilmaz, Yasin
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 205 - 212