Defeating deep learning based de-anonymization attacks with adversarial example

Cited by: 0
Authors
Yin, Haoyu [1 ]
Liu, Yingjian [1 ]
Li, Yue [1 ]
Guo, Zhongwen [1 ]
Wang, Yu [2 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Shandong, Peoples R China
[2] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
Funding
National Key R&D Program of China; National Natural Science Foundation of China (NSFC);
Keywords
Website fingerprinting; Adversarial example; Privacy; Deep learning; Anonymity;
DOI
10.1016/j.jnca.2023.103733
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep learning (DL) technologies bring new threats to network security. Website fingerprinting attacks (WFA) using DL models can identify a victim's browsing activities even when they are protected by anonymity technologies. Unfortunately, traditional countermeasures (website fingerprinting defenses, WFD) fail to preserve privacy against DL models. In this paper, we apply adversarial example techniques to implement new WFD under static analysis (SA) and dynamic perturbation (DP) settings. Although the DP setting is close to a real-world scenario, supervision signals are almost unavailable in it because upcoming traffic is uncertain and dependency analysis over time is difficult. The SA setting relaxes the real-time constraints so that WFD can be implemented from a supervised learning perspective. We propose the Greedy Injection Attack (GIA), a novel adversarial method for WFD under the SA setting based on a zero-injection vulnerability test. Furthermore, we propose Sniper, which mitigates the computational cost by using a DL model to approximate the zero-injection test. FCNSniper and RNNSniper are designed for the SA and DP settings, respectively. Experiments show that FCNSniper decreases the classification accuracy of a state-of-the-art WFA model by 96.57% with only 2.29% bandwidth overhead. The learned knowledge can be efficiently transferred to RNNSniper. As an indirect adversarial example attack approach, FCNSniper generalizes well to different target WFA models and datasets without suffering fatal failures under adversarial training.
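The abstract describes a greedy-injection defense: dummy packets are inserted into a traffic trace, each time at the position that most reduces the fingerprinting classifier's confidence, under a bandwidth budget. The sketch below illustrates that greedy loop only; the `score_fn` stand-in, the toy run-length "classifier", and all names here are illustrative assumptions, not the paper's actual GIA algorithm or models.

```python
def greedy_injection(trace, score_fn, true_label, budget):
    """Greedily insert dummy packets (direction +1) into a trace.

    Each round tries every insertion position and keeps the one that most
    reduces score_fn(trace, true_label), i.e. the classifier's confidence
    in the true website label. Stops early when no single injection helps.
    Illustrative sketch only -- not the paper's exact GIA procedure.
    """
    trace = list(trace)
    for _ in range(budget):
        best_pos = None
        best_score = score_fn(trace, true_label)
        for pos in range(len(trace) + 1):
            candidate = trace[:pos] + [1] + trace[pos:]
            s = score_fn(candidate, true_label)
            if s < best_score:
                best_pos, best_score = pos, s
        if best_pos is None:  # no injection reduces confidence; give up
            break
        trace.insert(best_pos, 1)
    return trace


def toy_score(trace, label):
    """Toy stand-in 'confidence': longest run of outgoing (-1) packets."""
    run = best = 0
    for d in trace:
        run = run + 1 if d == -1 else 0
        best = max(best, run)
    return best


# A burst of five outgoing packets; two dummy injections break it up.
perturbed = greedy_injection([-1] * 5, toy_score, 0, budget=2)
```

In a real defense the score function would be a surrogate DL model's softmax output for the true class, and injections would be constrained to positions the client can actually control.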
Pages: 12