Defending against Deep-Learning-Based Flow Correlation Attacks with Adversarial Examples

Cited: 0
Authors
Zhang, Ziwei [1 ]
Ye, Dengpan [1 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1155/2022/2962318
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Tor is vulnerable to flow correlation attacks: an adversary who can observe traffic metadata (e.g., packet timing and size) on both the client-to-entry-relay segment and the exit-relay-to-server segment can deanonymize users by computing the degree of association between the two flows. A recent study has shown that a deep-learning-based approach called DeepCorr achieves a flow correlation accuracy of over 96%. The escalating threat of this attack calls for timely and effective countermeasures. In this paper, we propose a novel defense mechanism that injects dummy packets into flow traces by precomputing adversarial examples; it breaks the flow pattern that the CNN model has learned and achieves a protection success rate of over 97%. Moreover, our defense requires only 20% bandwidth overhead, outperforming the state-of-the-art defense. We further consider deploying our defense in the real world. Unlike the offline setting, live traffic flows are only determined as packets arrive, so the defense must predict the features of the next packet. In addition, websites are not immutable: the characteristics of the transmitted packets change irregularly, which degrades the effectiveness of precomputed adversarial examples. To address these problems, we design a system that adapts our defense to real-world conditions and further reduces bandwidth overhead.
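The core idea of the abstract — lowering a learned correlator's score by adding only traffic (dummy packets or extra delay), under a fixed bandwidth budget — can be illustrated with a minimal sketch. This is not the paper's actual method or model: the linear `correlation_score`, the feature vector shape, and the 20% budget projection are all illustrative assumptions standing in for a DeepCorr-style CNN.

```python
import numpy as np

# Toy stand-in for a DeepCorr-style correlator: a fixed linear scorer over a
# flow feature vector. Shapes and weights are illustrative, not the paper's.
rng = np.random.default_rng(0)
w = rng.normal(size=32)  # "learned" correlation weights

def correlation_score(flow):
    """Higher score = correlator more confident the two flows match."""
    return float(w @ flow)

def adversarial_dummy_padding(flow, overhead=0.20, steps=10):
    """Precompute an additive perturbation that lowers the correlation score.

    Constraints mirror the defense setting: the perturbation is non-negative
    (we can only inject dummy traffic, never remove real packets), and the
    total added volume is capped at `overhead` of the original flow.
    """
    budget = overhead * flow.sum()
    delta = np.zeros_like(flow)
    for _ in range(steps):
        grad = w                               # d(score)/d(flow) for the linear toy model
        step = np.where(grad < 0, -grad, 0.0)  # pad only where padding lowers the score
        delta += step
        if delta.sum() > budget:               # project back onto the overhead budget
            delta *= budget / delta.sum()
    return flow + delta

flow = np.abs(rng.normal(size=32))             # toy flow trace (packet sizes/timings)
defended = adversarial_dummy_padding(flow)
```

In the real system a gradient through the CNN would replace `grad = w`, and the online constraint from the abstract (packet features are only known as they arrive) is what motivates precomputing the perturbation rather than deriving it per packet.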
Pages: 11
Related Papers
50 records
  • [1] Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks With Adversarial Traces
    Rahman, Mohammad Saidur
    Imani, Mohsen
    Mathews, Nate
    Wright, Matthew
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1594 - 1609
  • [2] Defending Deep Learning Models Against Adversarial Attacks
    Mani, Nag
    Moh, Melody
    Moh, Teng-Sheng
    [J]. INTERNATIONAL JOURNAL OF SOFTWARE SCIENCE AND COMPUTATIONAL INTELLIGENCE-IJSSCI, 2021, 13 (01): : 72 - 89
  • [3] Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks
    Alrawashdeh, Khaled
    Goldsmith, Stephen
    [J]. PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY (ISTAS), 2021, : 294 - 301
  • [4] Defending Against Adversarial Fingerprint Attacks Based on Deep Image Prior
    Yoo, Hwajung
    Hong, Pyo Min
    Kim, Taeyong
    Yoon, Jung Won
    Lee, Youn Kyu
    [J]. IEEE ACCESS, 2023, 11 : 78713 - 78725
  • [5] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    [J]. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [6] Enhancing the Sustainability of Deep-Learning-Based Network Intrusion Detection Classifiers against Adversarial Attacks
    Alotaibi, Afnan
    Rassam, Murad A.
    [J]. SUSTAINABILITY, 2023, 15 (12)
  • [7] Adversarial Examples: Attacks and Defenses for Deep Learning
    Yuan, Xiaoyuan
    He, Pan
    Zhu, Qile
    Li, Xiaolin
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (09) : 2805 - 2824
  • [8] Adversarial attacks on deep-learning-based SAR image target recognition
    Huang, Teng
    Zhang, Qixiang
    Liu, Jiabao
    Hou, Ruitao
    Wang, Xianmin
    Li, Ya
    [J]. JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2020, 162
  • [9] Adversarial Attacks and Defenses for Deep-Learning-Based Unmanned Aerial Vehicles
    Tian, Jiwei
    Wang, Buhong
    Guo, Rongxiao
    Wang, Zhen
    Cao, Kunrui
    Wang, Xiaodong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (22) : 22399 - 22409
  • [10] Cascaded Defending and Detecting of Adversarial Attacks Against Deep Learning System in Ophthalmic Imaging
    Ng, Wei Yan
    Xu, Yanyu
    Xu, Xinxing
    Ting, Daniel S. W.
    [J]. INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)