Proactive Privacy-preserving Learning for Cross-modal Retrieval

Cited by: 8
Authors
Zhang, Peng-Fei [1 ]
Bai, Guangdong [1 ]
Yin, Hongzhi [1 ]
Huang, Zi [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld 4072, Australia
Funding
Australian Research Council;
Keywords
Privacy protection; cross-modal retrieval; deep learning; adversarial data;
DOI
10.1145/3545799
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Deep cross-modal retrieval techniques have recently achieved remarkable performance, yet they also pose potential threats to data privacy. Enormous amounts of user-generated content conveying personal information are released and shared on the Internet, and an adversary may abuse a retrieval system to pinpoint sensitive information about a particular user, causing privacy leakage. In this article, we propose a data-centric Proactive Privacy-preserving Cross-modal Learning algorithm that achieves protection by employing a generator to transform original data into adversarial data with quasi-imperceptible perturbations before release. If the data source is infiltrated, the embedded adversarial data confuse retrieval models under the attacker's control into making erroneous predictions. We consider protection under a realistic and challenging setting in which no prior knowledge of the malicious models is available. To handle this, a surrogate retrieval model is introduced instead, acting as the target to fool. The whole network is trained under a game-theoretical framework in which the generator and the retrieval model persistently evolve to fight against each other. To facilitate optimization, a Gradient Reversal Layer module is inserted between the two models, enabling one-step learning. Extensive experiments on widely used real-world datasets demonstrate the effectiveness of the proposed method.
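The abstract describes a perturbation generator and a surrogate retrieval model coupled through a Gradient Reversal Layer (GRL), so that one backward pass updates both in opposite directions. Below is a minimal PyTorch sketch of that idea only; every module size, loss function, and name (PerturbationGenerator, SurrogateRetrievalModel, one_step, epsilon, lam) is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch, assuming feature-level inputs and a simple cosine alignment loss:
# the GRL lets the surrogate retrieval model minimise the loss while the generator
# is simultaneously pushed to maximise it, in a single backward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class PerturbationGenerator(nn.Module):
    """Adds a bounded, quasi-imperceptible perturbation to an input feature."""

    def __init__(self, dim, epsilon=0.05):  # epsilon is an assumed perturbation budget
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        delta = self.epsilon * torch.tanh(self.net(x))  # keep the perturbation small
        return x + delta


class SurrogateRetrievalModel(nn.Module):
    """Projects image and text features into a shared embedding space."""

    def __init__(self, img_dim, txt_dim, common_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, common_dim)
        self.txt_proj = nn.Linear(txt_dim, common_dim)

    def forward(self, img_feat, txt_feat):
        return (F.normalize(self.img_proj(img_feat), dim=1),
                F.normalize(self.txt_proj(txt_feat), dim=1))


def one_step(generator, retrieval, optimizer, img_feat, txt_feat, lam=1.0):
    """One adversarial update for both networks via the reversed gradient."""
    adv_img = generator(img_feat)
    adv_img = GradientReversal.apply(adv_img, lam)  # GRL between the two models
    img_emb, txt_emb = retrieval(adv_img, txt_feat)
    # Cross-modal alignment loss on matching pairs (illustrative choice only).
    loss = 1.0 - F.cosine_similarity(img_emb, txt_emb).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    img_dim, txt_dim = 512, 300  # assumed feature dimensions
    gen = PerturbationGenerator(img_dim)
    retr = SurrogateRetrievalModel(img_dim, txt_dim)
    opt = torch.optim.Adam(list(gen.parameters()) + list(retr.parameters()), lr=1e-4)
    img, txt = torch.randn(8, img_dim), torch.randn(8, txt_dim)
    print(one_step(gen, retr, opt, img, txt))
```

With this wiring, a single call to one_step updates both networks: the surrogate retrieval model descends the alignment loss, while the gradient reversed by the GRL drives the generator toward perturbations that degrade cross-modal alignment, mirroring the one-step, game-theoretical training described in the abstract.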
Pages: 23
Related Papers
50 items in total
  • [1] Proactive Privacy-preserving Learning for Retrieval
    Zhang, Peng-Fei
    Huang, Zi
    Xu, Xin-Shun
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3369 - 3376
  • [2] An Efficient Cross-Modal Privacy-Preserving Image-Text Retrieval Scheme
    Zhang, Kejun
    Xu, Shaofei
    Song, Yutuo
    Xu, Yuwei
    Li, Pengcheng
    Yang, Xiang
    Zou, Bing
    Wang, Wenbin
    SYMMETRY-BASEL, 2024, 16 (08):
  • [3] Dual-branch networks for privacy-preserving cross-modal retrieval in cloud computing
    Peng, Jianting
    Xiang, Xuyu
    Qin, Jiaohua
    Tan, Yun
    2025, 81 (01):
  • [4] DAP2CMH: Deep Adversarial Privacy-Preserving Cross-Modal Hashing
    Zhu, Lei
    Song, Jiayu
    Yang, Zhan
    Huang, Wenti
    Zhang, Chengyuan
    Yu, Weiren
    NEURAL PROCESSING LETTERS, 2022, 54 (04) : 2549 - 2569
  • [5] HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval
    Zhang, Chengyuan
    Song, Jiayu
    Zhu, Xiaofeng
    Zhu, Lei
    Zhang, Shichao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [6] Learning DALTS for cross-modal retrieval
    Yu, Zheng
    Wang, Wenmin
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2019, 4 (01) : 9 - 16
  • [7] Continual learning in cross-modal retrieval
    Wang, Kai
    Herranz, Luis
    van de Weijer, Joost
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3623 - 3633
  • [8] Sequential Learning for Cross-modal Retrieval
    Song, Ge
    Tan, Xiaoyang
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 4531 - 4539
  • [9] Deep Adversarial Learning Triplet Similarity Preserving Cross-Modal Retrieval Algorithm
    Li, Guokun
    Wang, Zhen
    Xu, Shibo
    Feng, Chuang
    Yang, Xiaohan
    Wu, Nannan
    Sun, Fuzhen
    MATHEMATICS, 2022, 10 (15)
  • [10] Learning latent hash codes with discriminative structure preserving for cross-modal retrieval
    Zhang, Donglin
    Wu, Xiao-Jun
    Yu, Jun
    PATTERN ANALYSIS AND APPLICATIONS, 2021, 24 (01) : 283 - 297