Pseudo-label driven deep hashing for unsupervised cross-modal retrieval

Cited by: 0
Authors
Zeng, XianHua [1 ]
Xu, Ke [1 ]
Xie, YiCai [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Hashing; Cross-modal retrieval; Unsupervised learning; Clustering; Network
DOI
10.1007/s13042-023-01842-5
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With the rapid development of big data and the Internet, cross-modal retrieval has become a popular research topic. Cross-modal hashing is an important research direction within cross-modal retrieval due to its high efficiency and small memory consumption. Recently, many unsupervised cross-modal hashing methods have achieved strong results on cross-modal retrieval tasks. However, narrowing the heterogeneity gap between different modalities and generating more discriminative hash codes remain the main problems of unsupervised hashing. In this paper, we propose a novel unsupervised cross-modal hashing method, Pseudo-label Driven Deep Hashing, to solve the aforementioned problems. We introduce clustering into our model to obtain initial semantic information, called pseudo-labels, and we propose a novel adjustment method that uses these pseudo-labels to adjust the joint-semantic similarity matrix. We construct a similarity consistency loss function that targets the heterogeneity gap between different modalities, and a fine-tuning strategy over real values and binary codes that closes the gap between the real-valued space and the Hamming space. We conduct experiments on five datasets: three natural-image datasets with larger inter-class distances and two medical datasets with smaller inter-class distances. The results demonstrate the superiority of our method over several unsupervised cross-modal hashing methods.
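The abstract gives only a high-level description of the pipeline. Below is a minimal Python sketch of the core idea it describes: clustering-derived pseudo-labels used to adjust a joint-semantic similarity matrix, followed by sign-based binarization of real-valued codes. Everything here is an illustrative assumption rather than the authors' implementation: the cosine-based fusion, the weight alpha, the k-means pseudo-labeling, and the plus/minus boost adjustment rule are all stand-ins chosen for the example.

    # Minimal sketch of pseudo-label-driven similarity adjustment for
    # unsupervised cross-modal hashing. All names and the specific
    # adjustment rule are illustrative assumptions, not the paper's code.
    import numpy as np
    from sklearn.cluster import KMeans


    def cosine_similarity_matrix(x: np.ndarray) -> np.ndarray:
        """Pairwise cosine similarity of row vectors."""
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
        return x @ x.T


    def joint_semantic_similarity(img_feat, txt_feat, alpha=0.5):
        """Fuse intra-modal similarities into a joint matrix (assumed fusion)."""
        return alpha * cosine_similarity_matrix(img_feat) + \
               (1 - alpha) * cosine_similarity_matrix(txt_feat)


    def adjust_with_pseudo_labels(sim, pseudo, boost=0.1):
        """Push similarity up for same-cluster pairs, down otherwise.

        `pseudo` holds cluster ids from k-means on the fused features;
        the +/- `boost` rule is a stand-in for the paper's adjustment.
        """
        same = (pseudo[:, None] == pseudo[None, :]).astype(sim.dtype)
        adjusted = sim + boost * (2 * same - 1)  # +boost if same cluster, -boost if not
        return np.clip(adjusted, -1.0, 1.0)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img_feat = rng.normal(size=(64, 128))   # e.g. CNN image features
        txt_feat = rng.normal(size=(64, 300))   # e.g. bag-of-words text features

        # Pseudo-labels from clustering the concatenated (fused) features.
        fused = np.concatenate([img_feat, txt_feat], axis=1)
        pseudo = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(fused)

        S = joint_semantic_similarity(img_feat, txt_feat)
        S_adj = adjust_with_pseudo_labels(S, pseudo)

        # Hash codes: sign of (here, random) real-valued network outputs.
        real_codes = rng.normal(size=(64, 32))
        binary_codes = np.sign(real_codes)
        # A similarity-consistency loss would align code similarity with S_adj,
        # and a fine-tuning stage would shrink |real_codes - binary_codes|.
        quant_gap = np.mean(np.abs(real_codes - binary_codes))
        print("adjusted similarity range:", S_adj.min(), S_adj.max())
        print("mean quantization gap:", quant_gap)

In a full method of this kind, the adjusted matrix would supervise a similarity consistency loss between modality-specific hash networks, and a fine-tuning stage would shrink the quantization gap between real-valued outputs and their binary codes, as the abstract describes.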
Pages: 3437 - 3456 (20 pages)
Related Papers (50 in total)
  • [1] Pseudo-label driven deep hashing for unsupervised cross-modal retrieval
    Zeng, XianHua
    Xu, Ke
    Xie, YiCai
    [J]. International Journal of Machine Learning and Cybernetics, 2023, 14 : 3437 - 3456
  • [2] Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval
    Chen, Dong
    Cheng, Miaomiao
    Min, Chen
    Jing, Liping
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [3] Deep Unsupervised Momentum Contrastive Hashing for Cross-modal Retrieval
    Lu, Kangkang
    Yu, Yanhua
    Liang, Meiyu
    Zhang, Min
    Cao, Xiaowen
    Zhao, Zehua
    Yin, Mengran
    Xue, Zhe
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 126 - 131
  • [4] Clustering-driven Deep Adversarial Hashing for scalable unsupervised cross-modal retrieval
    Shen, Xiao
    Zhang, Haofeng
    Li, Lunbo
    Zhang, Zheng
    Chen, Debao
    Liu, Li
    [J]. NEUROCOMPUTING, 2021, 459 : 152 - 164
  • [5] Deep Label Feature Fusion Hashing for Cross-Modal Retrieval
    Ren, Dongxiao
    Xu, Weihua
    Wang, Zhonghua
    Sun, Qinxiu
    [J]. IEEE ACCESS, 2022, 10 : 100276 - 100285
  • [6] Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval
    Li, Mingyong
    Wang, Hongya
    [J]. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021: 183 - 191
  • [7] Unsupervised Multi-modal Hashing for Cross-Modal Retrieval
    Yu, Jun
    Wu, Xiao-Jun
    Zhang, Donglin
    [J]. COGNITIVE COMPUTATION, 2022, 14 (03) : 1159 - 1171
  • [8] Label-Based Deep Semantic Hashing for Cross-Modal Retrieval
    Weng, Weiwei
    Wu, Jiagao
    Yang, Lu
    Liu, Linfeng
    Hu, Bin
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2019), PT III, 2019, 11955 : 24 - 36
  • [9] Unsupervised Deep Fusion Cross-modal Hashing
    Huang, Jiaming
    Min, Chen
    Jing, Liping
    [J]. ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019: 358 - 366