Defeating deep learning based de-anonymization attacks with adversarial example

Times Cited: 0
Authors
Yin, Haoyu [1 ]
Liu, Yingjian [1 ]
Li, Yue [1 ]
Guo, Zhongwen [1 ]
Wang, Yu [2 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Shandong, Peoples R China
[2] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
Funding
National Key R&D Program of China; National Natural Science Foundation of China (NSFC);
Keywords
Website fingerprinting; Adversarial example; Privacy; Deep learning; Anonymity;
DOI
10.1016/j.jnca.2023.103733
Chinese Library Classification (CLC) code
TP3 [computing technology; computer technology];
Discipline classification code
0812;
Abstract
Deep learning (DL) technologies bring new threats to network security. Website fingerprinting attacks (WFA) using DL models can identify a victim's browsing activities even when they are protected by anonymity technologies. Unfortunately, traditional countermeasures (website fingerprinting defenses, WFD) fail to preserve privacy against DL models. In this paper, we apply adversarial example techniques to implement new WFD under static analysis (SA) and dynamic perturbation (DP) settings. Although the DP setting is close to a real-world scenario, supervision for it is almost unavailable due to the uncertainty of upcoming traffic and the difficulty of dependency analysis over time. The SA setting relaxes the real-time constraints so that WFD can be implemented from a supervised learning perspective. We propose the Greedy Injection Attack (GIA), a novel adversarial method for WFD under the SA setting based on a zero-injection vulnerability test. Furthermore, Sniper is proposed to mitigate the computational cost by using a DL model to approximate the zero-injection test. FCNSniper and RNNSniper are designed for the SA and DP settings, respectively. Experiments show that FCNSniper decreases the classification accuracy of the state-of-the-art WFA model by 96.57% with only 2.29% bandwidth overhead. The learned knowledge can be efficiently transferred to RNNSniper. As an indirect adversarial example attack approach, FCNSniper generalizes well to different target WFA models and datasets without suffering fatal failures from adversarial training.
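The following is a minimal illustrative sketch, in Python, of the greedy dummy-packet injection idea described in the abstract (GIA under the SA setting). It is an assumption-laden reconstruction, not the authors' implementation: the trace encoding (+1/-1 packet directions with zero padding), the `surrogate` classifier interface, and the `budget` overhead parameter are all hypothetical.

```python
# Sketch of a greedy dummy-packet injection defense in the spirit of GIA (SA setting).
# NOT the paper's implementation; interfaces and encodings below are assumptions.
import numpy as np


def pad(seq, length):
    """Truncate or zero-pad a direction sequence to a fixed model input length."""
    seq = list(seq)[:length]
    return np.array(seq + [0] * (length - len(seq)), dtype=np.float32)[None, :]


def greedy_injection(trace, true_label, surrogate, budget=0.05, seq_len=5000):
    """Greedily insert dummy packets where they most reduce the surrogate
    classifier's confidence in the true website label.

    trace: iterable of packet directions (+1 outgoing, -1 incoming).
    surrogate: callable mapping a (1, seq_len) array to class probabilities.
    budget: maximum fraction of injected dummy packets (bandwidth overhead).
    """
    perturbed = list(trace)
    max_dummies = int(budget * len(perturbed))

    for _ in range(max_dummies):
        best = None  # (confidence, position, direction)
        for pos in range(len(perturbed) + 1):      # candidate insertion points
            for direction in (+1, -1):             # dummy packet direction
                cand = perturbed[:pos] + [direction] + perturbed[pos:]
                conf = surrogate(pad(cand, seq_len))[0, true_label]
                if best is None or conf < best[0]:
                    best = (conf, pos, direction)
        _, pos, direction = best
        perturbed = perturbed[:pos] + [direction] + perturbed[pos:]
        # Stop early once the surrogate no longer predicts the true website.
        if np.argmax(surrogate(pad(perturbed, seq_len))) != true_label:
            break
    return np.array(perturbed)
```

The exhaustive inner search over insertion points is what makes a plain greedy attack expensive; per the abstract, the Sniper models (FCNSniper, RNNSniper) mitigate exactly this cost by training a DL model to approximate the zero-injection vulnerability test instead of querying the classifier at every candidate position.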
Pages: 12