Oriole: Thwarting Privacy Against Trustworthy Deep Learning Models

Cited by: 0
Authors
Chen, Liuqiao [1 ]
Wang, Hu [2 ]
Zhao, Benjamin Zi Hao [3 ,4 ]
Xue, Minhui [2 ]
Qian, Haifeng [1 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Univ Adelaide, Adelaide, SA, Australia
[3] Univ New South Wales, Sydney, NSW, Australia
[4] Data61 CSIRO, Sydney, NSW, Australia
Funding
Australian Research Council;
Keywords
Data poisoning; Deep learning privacy; Facial recognition; Multi-cloaks;
DOI
10.1007/978-3-030-90567-5_28
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline Code
0812;
Abstract
Deep neural networks have achieved unprecedented success in face recognition, to the point that anyone can crawl others' images from the Internet without explicit permission and train high-precision face recognition models on them, a serious violation of privacy. Recently, the well-known Fawkes system [37] (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of the originals. In this paper, we present ORIOLE, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes: the attacker's face recognition model is trained on multi-cloaked images generated by ORIOLE. Consequently, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are exposed. Experimental results show that ORIOLE effectively interferes with the performance of the Fawkes system and achieves promising attack results. Our ablation study highlights several principal factors that affect ORIOLE's performance, including the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks generated for each uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope the methodology presented in this paper will alert the security community to the need for more robust privacy-preserving deep learning models.
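The DSSIM perturbation budget mentioned in the abstract bounds how far a cloaked image may visually drift from the original. As a rough illustration (not the authors' code), the following minimal Python sketch checks a cloak against such a budget, assuming the common definition DSSIM = (1 - SSIM) / 2 and scikit-image's structural_similarity; the budget value used here is hypothetical.

```python
# Minimal sketch of a DSSIM perturbation-budget check (illustrative only;
# not the ORIOLE/Fawkes implementation). Assumes float images in [0, 1]
# with shape (H, W, 3) and the definition DSSIM = (1 - SSIM) / 2.
import numpy as np
from skimage.metrics import structural_similarity


def dssim(original: np.ndarray, cloaked: np.ndarray) -> float:
    """Structural dissimilarity between two images; 0 means identical."""
    ssim = structural_similarity(original, cloaked,
                                 channel_axis=-1, data_range=1.0)
    return (1.0 - ssim) / 2.0


def within_budget(original: np.ndarray, cloaked: np.ndarray,
                  budget: float = 0.007) -> bool:
    """Check a cloak against a DSSIM budget (0.007 is a hypothetical value)."""
    return dssim(original, cloaked) <= budget


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((112, 112, 3))                      # stand-in "face"
    cloak = np.clip(img + 0.01 * rng.standard_normal(img.shape), 0.0, 1.0)
    print(dssim(img, cloak), within_budget(img, cloak))
```

Under this definition a budget of 0 forces the cloak to be imperceptible (identical images), while larger budgets permit stronger, more visible perturbations; the paper's ablation study varies this budget as one of its principal factors.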
Pages: 550-568
Number of pages: 19