Proactive Privacy-preserving Learning for Retrieval

Cited by: 0
Authors
Zhang, Peng-Fei [1]
Huang, Zi [1]
Xu, Xin-Shun [2]
Affiliations
[1] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld, Australia
[2] Shandong Univ, Sch Software, Jinan, Peoples R China
Funding
Australian Research Council;
Keywords
DOI
Not available
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Neural Networks (DNNs) have recently achieved remarkable performance in image retrieval, yet they also pose serious threats to data privacy. On the one hand, a deployed DNN-based system may be misused to look up data without consent. On the other hand, organizations or individuals may collect data, legally or illegally, to train high-performance models outside the scope of legitimate purposes. Unfortunately, little effort has been made to safeguard data privacy against such malicious uses of DNNs. In this paper, we propose a data-centric Proactive Privacy-preserving Learning (PPL) algorithm for hashing-based retrieval, which achieves protection by employing a generator to transform the original data into adversarial data with quasi-imperceptible perturbations before release. When the data source is infiltrated, the adversarial data confuse menacing retrieval models into making erroneous predictions. Since prior knowledge of the malicious models is unavailable, a surrogate retrieval model is introduced to act as the fooling target. The framework is trained through a two-player game between the generator and the surrogate model. Specifically, the generator is updated to enlarge the gap between the adversarial data and the original data, aiming to lower the search accuracy of the surrogate model; conversely, the surrogate model is trained with the opposing objective of maintaining search performance. As a result, an effective and robust adversarial generator is obtained. Furthermore, to facilitate efficient optimization, a Gradient Reversal Layer (GRL) module is inserted between the two models, enabling the two-player game to be carried out in a single learning step. Extensive experiments on three widely used realistic datasets demonstrate the effectiveness of the proposed method.
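The abstract describes a GRL-coupled two-player game between a perturbation generator and a surrogate retrieval model. The PyTorch sketch below is a rough illustration only, not the authors' implementation: the toy Generator/Surrogate networks, the cross-entropy loss standing in for the hashing objective, and the epsilon bound on the perturbation are all assumptions. It only shows how a gradient reversal layer lets a single backward pass update the surrogate to minimize a retrieval-style loss while pushing the generator to maximize it.

```python
# Minimal sketch of a GRL-based two-player game (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows back to the generator; None for the non-tensor lambd.
        return -ctx.lambd * grad_output, None

class Generator(nn.Module):
    """Adds a bounded, quasi-imperceptible perturbation to each input (toy version)."""
    def __init__(self, dim, eps=0.05):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.eps = eps

    def forward(self, x):
        return x + self.eps * self.net(x)

class Surrogate(nn.Module):
    """Stand-in hashing/retrieval model that serves as the fooling target."""
    def __init__(self, dim, bits, classes):
        super().__init__()
        self.hash = nn.Sequential(nn.Linear(dim, bits), nn.Tanh())
        self.head = nn.Linear(bits, classes)

    def forward(self, x):
        return self.head(self.hash(x))

dim, bits, classes = 128, 32, 10
gen, sur = Generator(dim), Surrogate(dim, bits, classes)
opt = torch.optim.Adam(list(gen.parameters()) + list(sur.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()          # stand-in for a hashing/retrieval loss

x = torch.randn(16, dim)                   # toy batch of image features
y = torch.randint(0, classes, (16,))       # toy labels

for _ in range(5):
    adv = gen(x)                                   # adversarial version of the data
    logits = sur(GradReverse.apply(adv, 1.0))      # GRL connects the two players
    loss = criterion(logits, y)                    # surrogate minimizes this loss;
    opt.zero_grad()                                # reversed gradients make the generator
    loss.backward()                                # maximize it, in one backward pass
    opt.step()
```

Because the GRL flips the gradient sign only on the path into the generator, both players are updated by the same optimizer step, which is what allows the min-max game to run as one-step learning rather than alternating updates.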
Pages: 3369-3376
Page count: 8
Related papers (50 in total)
  • [1] Proactive Privacy-preserving Learning for Cross-modal Retrieval
    Zhang, Peng-Fei
    Bai, Guangdong
    Yin, Hongzhi
    Huang, Zi
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [2] Privacy-Preserving Machine Learning
    Chow, Sherman S. M.
    [J]. FRONTIERS IN CYBER SECURITY, 2018, 879 : 3 - 6
  • [3] Privacy-Preserving Deep Learning
    Shokri, Reza
    Shmatikov, Vitaly
    [J]. 2015 53RD ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2015, : 909 - 910
  • [4] Privacy-Preserving Classifier Learning
    Brickell, Justin
    Shmatikov, Vitaly
    [J]. FINANCIAL CRYPTOGRAPHY AND DATA SECURITY, 2009, 5628 : 128 - 147
  • [5] Privacy-Preserving Deep Learning
    Shokri, Reza
    Shmatikov, Vitaly
    [J]. CCS'15: PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2015, : 1310 - 1321
  • [6] Privacy-preserving image retrieval in a distributed environment
    Zhou, Fucai
    Qin, Shiyue
    Hou, Ruitao
    Zhang, Zongye
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (10) : 7478 - 7501
  • [7] Toward privacy-preserving JPEG image retrieval
    Cheng, Hang
    Wang, Jingyue
    Wang, Meiqing
    Zhong, Shangping
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2017, 26 (04)
  • [8] Privacy-Preserving Machine Learning [Cryptography]
    Kerschbaum, Florian
    Lukas, Nils
    [J]. IEEE SECURITY & PRIVACY, 2023, 21 (06) : 90 - 94
  • [9] Privacy-Preserving Personalized Federated Learning
    Hu, Rui
    Guo, Yuanxiong
    Li, Hongning
    Pei, Qingqi
    Gong, Yanmin
    [J]. ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [10] Privacy-Preserving Stochastic Gradual Learning
    Han, Bo
    Tsang, Ivor W.
    Xiao, Xiaokui
    Chen, Ling
    Fung, Sai-Fu
    Yu, Celina P.
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2021, 33 (08) : 3129 - 3140