Deceiving supervised machine learning models via adversarial data poisoning attacks: a case study with USB keyboards

Cited: 3
Authors
Chillara, Anil Kumar [1 ]
Saxena, Paresh [1 ]
Maiti, Rajib Ranjan [1 ]
Gupta, Manik [1 ]
Kondapalli, Raghu [2 ]
Zhang, Zhichao [2 ]
Kesavan, Krishnakumar [2 ]
Affiliations
[1] BITS Pilani, CSIS Dept, Hyderabad 500078, Telangana, India
[2] Axiado Corp, 2610 Orchard Pkwy,3rd Fl, San Jose, CA 95134 USA
Keywords
USB; Adversarial learning; Data poisoning attacks; Keystroke injection attacks; Supervised learning;
DOI
10.1007/s10207-024-00834-y
Chinese Library Classification: TP [Automation and Computer Technology];
Discipline Code: 0812;
Abstract
Due to its plug-and-play functionality and wide device support, the universal serial bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust granted to USB devices, which has created a vast array of attack vectors. Malicious USB devices exploit this trust by disguising themselves as benign peripherals and covertly implanting malicious commands into connected host devices. Existing research employs supervised learning models to identify such malicious devices, but our study reveals a weakness in these models when faced with sophisticated data poisoning attacks. We propose, design, and implement a sophisticated adversarial data poisoning attack to demonstrate how these models can be manipulated into misclassifying an attack device as a benign device. Our method entails generating keystroke data using a microprogrammable keystroke attack device. We develop an adversarial attacker by meticulously analyzing the distribution of data features generated via USB keyboards by benign users. The initial training data is then modified by exploiting firmware-level modifications within the attack device. Upon evaluating the models, our findings reveal a significant decrease in detection accuracy, from 99% to 53%, when the adversarial attacker is employed. This work highlights the critical need to reevaluate the dependability of machine learning-based USB threat detection mechanisms in the face of increasingly sophisticated attack methods. The vulnerabilities demonstrated underscore the importance of developing more robust and resilient detection strategies to protect against the evolution of malicious USB devices.
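The core idea the abstract describes, a keystroke-injection device that reshapes its timing behavior to match the distribution of benign human typing, can be illustrated with a minimal, hypothetical sketch. Everything below (the Gaussian timing distributions, the single-threshold "detector," and all function names) is an illustrative assumption for exposition, not the paper's actual model or dataset:

```python
import random
import statistics

random.seed(7)  # fixed seed so the sketch is deterministic

def benign_delays(n):
    # Human typing: inter-keystroke delays roughly Normal(120 ms, 30 ms),
    # clipped at a 5 ms floor (assumed distribution, for illustration only)
    return [max(5.0, random.gauss(120, 30)) for _ in range(n)]

def naive_attack_delays(n):
    # A scripted injection device types at a fixed, machine-like cadence
    return [random.gauss(10, 1) for _ in range(n)]

def adaptive_attack_delays(n):
    # Hypothetical firmware-level modification: the device modulates its
    # injection delays to mimic the benign timing distribution it profiled
    return [max(5.0, random.gauss(120, 30)) for _ in range(n)]

def train_threshold(benign, attack):
    # Toy supervised "detector": one threshold on the mean delay,
    # placed midway between the two class means seen in training
    return (statistics.mean(benign) + statistics.mean(attack)) / 2

def classify(delays, threshold):
    # Sessions with a mean delay below the threshold are flagged as attacks
    return "attack" if statistics.mean(delays) < threshold else "benign"

thr = train_threshold(benign_delays(200), naive_attack_delays(200))
print(classify(naive_attack_delays(50), thr))     # scripted device: flagged
print(classify(adaptive_attack_delays(50), thr))  # adaptive device: evades
```

The point of the sketch is only that a detector trained on timing features separating "machine-fast" from "human-slow" input loses its discriminative signal once the attack device's firmware samples its delays from the benign distribution; the paper's models and features are considerably richer than this one-dimensional toy.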
Pages: 2043-2061
Page count: 19
Related Papers (50 total)
  • [31] A Detailed Study on Adversarial attacks and Defense Mechanisms on various Deep Learning Models
    Priya, K. V.
    Dinesh, Peter J.
    2023 ADVANCED COMPUTING AND COMMUNICATION TECHNOLOGIES FOR HIGH PERFORMANCE APPLICATIONS, ACCTHPA, 2023,
  • [32] A Sensitivity Analysis of Poisoning and Evasion Attacks in Network Intrusion Detection System Machine Learning Models
    Talty, Kevin
    Stockdale, John
    Bastian, Nathaniel D.
    2021 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2021), 2021,
  • [33] Instance-based Supervised Machine Learning Models for Detecting GPS Spoofing Attacks on UAS
    Aissou, Ghilas
    Benouadah, Selma
    El Alami, Hassan
    Kaabouch, Naima
    2022 IEEE 12TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2022, : 208 - 214
  • [34] Tree-based Supervised Machine Learning Models For Detecting GPS Spoofing Attacks on UAS
    Aissou, Ghilas
    Slimane, Hadjar Ould
    Benouadah, Selma
    Kaabouch, Naima
    2021 IEEE 12TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2021, : 649 - 653
  • [35] Analysis of deceptive data attacks with adversarial machine learning for solar photovoltaic power generation forecasting
    Kuzlu, Murat
    Sarp, Salih
    Catak, Ferhat Ozgur
    Cali, Umit
    Zhao, Yanxiao
    Elma, Onur
    Guler, Ozgur
    ELECTRICAL ENGINEERING, 2024, 106 (02) : 1815 - 1823
  • [37] Federated Machine Learning in Medical imaging and against Adversarial Attacks: A retrospective multicohort study
    Teo, Zhen Ling
    Zhang, Xiaoman
    Tan, Ting Fang
    Ravichandran, Narrendar
    Yong, Liu
    Ting, Daniel S. W.
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [38] A Comparative Study on the Impact of Adversarial Machine Learning Attacks on Contemporary Intrusion Detection Datasets
    Pujari, M.
    Pacheco, Y.
    Cherukuri, B.
    Sun, W.
    SN COMPUTER SCIENCE, 3 (5)
  • [39] Membership Inference Attacks Against Machine Learning Models via Prediction Sensitivity
    Liu, Lan
    Wang, Yi
    Liu, Gaoyang
    Peng, Kai
    Wang, Chen
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 2341 - 2347
  • [40] Accelerating Global Sensitivity Analysis via Supervised Machine Learning Tools: Case Studies for Mineral Processing Models
    Lucay, Freddy A.
    MINERALS, 2022, 12 (06)