Using Honeypots in a Decentralized Framework to Defend Against Adversarial Machine-Learning Attacks

Cited by: 2
Authors
Younis, Fadi [1 ]
Miri, Ali [1 ]
Institution
[1] Ryerson Univ, Dept Comp Sci, Toronto, ON, Canada
Keywords
Adversarial machine learning; Deception-as-a-defence; Exploratory attacks; Evasion attacks; High-interaction honeypots; Honey-tokens
DOI
10.1007/978-3-030-29729-9_2
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The market demand for online machine-learning services is increasing, and so are the threats against them. Adversarial inputs represent a new threat to Machine-Learning-as-a-Service (MLaaS) platforms. Meticulously crafted malicious inputs can mislead and confuse the learning model, even when the adversary has only limited access to inputs and output labels. As a result, there has been increased interest in defence techniques to combat these types of attacks. In this paper, we propose a network of High-Interaction Honeypots (HIHP) as a decentralized defence framework that prevents an adversary from corrupting the learning model. We accomplish our aim by (1) preventing the attacker from correctly learning the labels and approximating the architecture of the black-box system; (2) luring the attacker away towards a decoy model using Adversarial HoneyTokens; and (3) creating infeasible computational work for the adversary.
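The decoy-model idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the gateway, the probe-counting heuristic, and both model stubs (`real_model`, `decoy_model`, `HoneypotGateway`) are hypothetical names invented for illustration. The sketch only shows the general shape of step (2): a client that issues suspiciously many queries is silently redirected from the production model to a decoy that returns misleading labels, so any surrogate model the attacker trains approximates the wrong decision boundary.

```python
# Hypothetical sketch of query redirection to a decoy model.
# None of these names come from the paper; thresholds and models are toy stand-ins.
import random


def real_model(x):
    """Stand-in for the production classifier."""
    return "malicious" if sum(x) > 1.0 else "benign"


def decoy_model(x):
    """Decoy returns deliberately unreliable labels, poisoning any
    surrogate the adversary tries to train from query/label pairs."""
    return random.choice(["malicious", "benign"])


class HoneypotGateway:
    """Counts queries per client; heavy probing is routed to the decoy."""

    def __init__(self, probe_threshold=100):
        self.probe_threshold = probe_threshold
        self.query_counts = {}

    def classify(self, client_id, x):
        n = self.query_counts.get(client_id, 0) + 1
        self.query_counts[client_id] = n
        if n > self.probe_threshold:
            # Suspected model-extraction probing: answer from the decoy.
            return decoy_model(x)
        return real_model(x)
```

A real deployment would need a far better probing detector than a raw query count, and the paper's framework additionally embeds honeytokens and distributes the decoys across a honeypot network; this sketch shows only the redirection mechanism.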
Pages: 24-48 (25 pages)
Related Papers
50 items
  • [21] SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning
    Apruzzese, Giovanni
    Conti, Mauro
    Yuan, Ying
    [J]. PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 171 - 185
  • [22] An Autoencoder Based Approach to Defend Against Adversarial Attacks for Autonomous Vehicles
    Gan, Houchao
    Liu, Chen
    [J]. 2020 INTERNATIONAL CONFERENCE ON CONNECTED AND AUTONOMOUS DRIVING (METROCAD 2020), 2020, : 43 - 44
  • [23] Adversarial attacks on medical machine learning
    Finlayson, Samuel G.
    Bowers, John D.
    Ito, Joichi
    Zittrain, Jonathan L.
    Beam, Andrew L.
    Kohane, Isaac S.
    [J]. SCIENCE, 2019, 363 (6433) : 1287 - 1289
  • [24] Enablers Of Adversarial Attacks in Machine Learning
    Izmailov, Rauf
    Sugrim, Shridatt
    Chadha, Ritu
    McDaniel, Patrick
    Swami, Ananthram
    [J]. 2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 425 - 430
  • [25] How to Defend and Secure Deep Learning Models Against Adversarial Attacks in Computer Vision: A Systematic Review
    Dhamija, Lovi
    Bansal, Urvashi
    [J]. NEW GENERATION COMPUTING, 2024, 42 (05) : 1165 - 1235
  • [26] Darknet traffic classification and adversarial attacks using machine learning
    Rust-Nguyen, Nhien
    Sharma, Shruti
    Stamp, Mark
    [J]. COMPUTERS & SECURITY, 2023, 127
  • [27] Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems
    Haroon, Muhammad Shahzad
    Ali, Husnain Mansoor
    [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (02): : 3513 - 3527
  • [28] Defense-Net: Defend Against a Wide Range of Adversarial Attacks through Adversarial Detector
    Rakin, Adnan Siraj
    Fan, Deliang
    [J]. 2019 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2019), 2019, : 333 - 338
  • [29] WASSERTRAIN: AN ADVERSARIAL TRAINING FRAMEWORK AGAINST WASSERSTEIN ADVERSARIAL ATTACKS
    Zhao, Qingye
    Chen, Xin
    Zhao, Zhuoyu
    Tang, Enyi
    Li, Xuandong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2734 - 2738
  • [30] Approach to Detecting Attacks against Machine Learning Systems with a Generative Adversarial Network
    Kotenko, I.V.
    Saenko, I.B.
    Lauta, O.S.
    Vasilev, N.A.
    Sadovnikov, V.E.
    [J]. PATTERN RECOGNITION AND IMAGE ANALYSIS, 2024, 34 (03) : 589 - 596