Deceiving supervised machine learning models via adversarial data poisoning attacks: a case study with USB keyboards

Cited: 3
Authors
Chillara, Anil Kumar [1 ]
Saxena, Paresh [1 ]
Maiti, Rajib Ranjan [1 ]
Gupta, Manik [1 ]
Kondapalli, Raghu [2 ]
Zhang, Zhichao [2 ]
Kesavan, Krishnakumar [2 ]
Affiliations
[1] BITS Pilani, CSIS Dept, Hyderabad 500078, Telangana, India
[2] Axiado Corp, 2610 Orchard Pkwy,3rd Fl, San Jose, CA 95134 USA
Keywords
USB; Adversarial learning; Data poisoning attacks; Keystroke injection attacks; Supervised learning;
DOI
10.1007/s10207-024-00834-y
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812
Abstract
Due to its plug-and-play functionality and wide device support, the universal serial bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust granted to USB devices has created a vast array of attack vectors. Malicious USB devices exploit this trust by disguising themselves as benign peripherals and covertly injecting malicious commands into connected host devices. Existing research employs supervised learning models to identify such malicious devices, but our study reveals a weakness in these models when faced with sophisticated data poisoning attacks. We propose, design, and implement a sophisticated adversarial data poisoning attack to demonstrate how these models can be manipulated into misclassifying an attack device as benign. Our method entails generating keystroke data using a microprogrammable keystroke attack device. We develop an adversarial attacker by meticulously analyzing the distribution of features in keystroke data generated by benign users via USB keyboards. The initial training data is then modified by exploiting firmware-level modifications within the attack device. Upon evaluating the models, our findings reveal a significant decrease in detection accuracy, from 99% to 53%, when the adversarial attacker is employed. This work highlights the critical need to reevaluate the dependability of machine-learning-based USB threat detection mechanisms in the face of increasingly sophisticated attack methods. The demonstrated vulnerabilities underscore the importance of developing more robust and resilient detection strategies to protect against the evolution of malicious USB devices.
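The poisoning idea summarized in the abstract can be illustrated with a minimal toy sketch (all numbers, names, and the detector itself are hypothetical illustrations, not taken from the paper): a timing-based detector learns a "typed implausibly fast for a human" threshold from benign inter-keystroke intervals, and an attack device's firmware contributes fast-paced sessions that are recorded as benign during training-data collection, widening the learned benign profile until real keystroke injections evade detection.

```python
import random
import statistics

random.seed(0)  # deterministic toy example

def session_mean(mean_ms, sd_ms, n=50):
    """One 'session' = mean inter-keystroke interval (ms) over n keystrokes."""
    return statistics.mean(random.gauss(mean_ms, sd_ms) for _ in range(n))

# Hypothetical data: human typists ~120 ms between keys; injector bursts ~5 ms.
benign_train = [session_mean(120, 30) for _ in range(200)]
attack_test  = [session_mean(5, 1) for _ in range(100)]

def fit_threshold(train):
    # Toy detector: flag any session far below the benign timing norm.
    return statistics.mean(train) - 3 * statistics.stdev(train)

def detect_rate(threshold, sessions):
    return sum(s < threshold for s in sessions) / len(sessions)

clean_rate = detect_rate(fit_threshold(benign_train), attack_test)

# Poisoning step (hypothetical firmware behavior): the attack device paces
# its keystrokes during data collection so fast sessions are labeled benign,
# stretching the learned threshold until real injections fall inside it.
poisoned_train = benign_train + [session_mean(20, 5) for _ in range(100)]
poisoned_rate = detect_rate(fit_threshold(poisoned_train), attack_test)

print(clean_rate, poisoned_rate)  # detection collapses after poisoning
```

The sketch mirrors only the shape of the attack (profiling benign timing statistics, then corrupting training data via the device firmware); the paper's actual features, models, and accuracy figures are far richer than this single-threshold caricature.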
Pages: 2043-2061 (19 pages)