Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval

Cited: 5
Authors
Chen, Dong [1 ]
Cheng, Miaomiao [1 ]
Min, Chen [1 ]
Jing, Liping [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
cross-modal retrieval; partial multimodal data; cross-modal hashing; imputation; unsupervised learning;
DOI
10.1109/ijcnn48605.2020.9206611
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modal retrieval, given data of one specific modality as a query, aims to search for relevant data in other modalities. Recently, cross-modal hashing has attracted much attention due to its high efficiency and low storage cost. Its main idea is to approximate cross-modality similarity via binary codes. This kind of method works well when the cross-modal data are completely observed. However, real-world applications rarely satisfy this condition: part of the information is often unobserved in some modality. Such partial multimodal data lead to a lack of pairwise information and thus degrade the performance of cross-modal hashing. In this paper, we propose a novel unsupervised cross-modal hashing approach, named Unsupervised Deep Imputed Hashing (UDIH). It adopts a two-stage learning strategy. First, the unobserved data in each pair are imputed by the proposed generators. Then a neural network with a weighted triplet loss is applied to a correlation graph, constructed with the aid of the augmented data, to learn the hash codes in Hamming space for each modality. UDIH preserves both the semantic consistency and the differences among data objects. Extensive experimental results show that the proposed method outperforms state-of-the-art methods on two benchmark datasets (MIRFlickr and NUS-WIDE). The source code is available at https://github.com/AkChen/UDIH
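The abstract describes UDIH only at a high level. The following is a minimal sketch, not the authors' implementation, of how a weighted triplet loss over relaxed hash codes might look; it assumes PyTorch, tanh-relaxed codes in [-1, 1], and per-triplet weights taken from correlation-graph edge strengths. All names here are hypothetical.

import torch
import torch.nn.functional as F

def weighted_triplet_hash_loss(anchor, positive, negative, weights, margin=2.0):
    # anchor, positive, negative: (batch, bits) relaxed codes in [-1, 1]
    # weights: (batch,) per-triplet weights, e.g. correlation-graph edge
    # strengths (a modeling assumption, not taken from the paper)
    #
    # Squared Euclidean distance between codes in [-1, 1] is proportional
    # to Hamming distance once the codes are binarized by sign().
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    # Standard triplet hinge, scaled per triplet by its graph-derived weight.
    per_triplet = F.relu(d_pos - d_neg + margin)
    return (weights * per_triplet).mean()

# Toy usage: 16-bit relaxed codes for a batch of 4 triplets.
torch.manual_seed(0)
a, p, n = (torch.tanh(torch.randn(4, 16)) for _ in range(3))
w = torch.tensor([1.0, 0.5, 0.8, 1.0])  # hypothetical graph-based weights
print(weighted_triplet_hash_loss(a, p, n, w).item())

At retrieval time the relaxed codes would be binarized, e.g. with sign(), to obtain the final hash codes compared by Hamming distance.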
Pages: 8
Related Papers
50 records in total
  • [1] Deep Unsupervised Momentum Contrastive Hashing for Cross-modal Retrieval
    Lu, Kangkang
    Yu, Yanhua
    Liang, Meiyu
    Zhang, Min
    Cao, Xiaowen
    Zhao, Zehua
    Yin, Mengran
    Xue, Zhe
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 126 - 131
  • [2] Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval
    Li, Mingyong
    Wang, Hongya
    [J]. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021, : 183 - 191
  • [3] Unsupervised Multi-modal Hashing for Cross-Modal Retrieval
    Yu, Jun
    Wu, Xiao-Jun
    Zhang, Donglin
    [J]. COGNITIVE COMPUTATION, 2022, 14 (03) : 1159 - 1171
  • [4] Unsupervised Deep Fusion Cross-modal Hashing
    Huang, Jiaming
    Min, Chen
    Jing, Liping
    [J]. ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 358 - 366
  • [5] Robust Unsupervised Cross-modal Hashing for Multimedia Retrieval
    Cheng, Miaomiao
    Jing, Liping
    Ng, Michael K.
    [J]. ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2020, 38 (03)
  • [6] Pseudo-label driven deep hashing for unsupervised cross-modal retrieval
    Zeng, XianHua
    Xu, Ke
    Xie, YiCai
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (10) : 3437 - 3456
  • [7] Deep noise mitigation and semantic reconstruction hashing for unsupervised cross-modal retrieval
    Zhang, Cheng
    Wan, Yuan
    Qiang, Haopeng
    [J]. NEURAL COMPUTING & APPLICATIONS, 2024, 36 (10) : 5383 - 5397
  • [8] Deep Semantic-Preserving Reconstruction Hashing for Unsupervised Cross-Modal Retrieval
    Cheng, Shuli
    Wang, Liejun
    Du, Anyu
    [J]. ENTROPY, 2020, 22 (11) : 1 - 22