Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval

Cited by: 5
Authors
Chen, Dong [1 ]
Cheng, Miaomiao [1 ]
Min, Chen [1 ]
Jing, Liping [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
cross-modal retrieval; partial multimodal data; cross-modal hashing; imputation; unsupervised learning;
DOI
10.1109/ijcnn48605.2020.9206611
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modal retrieval, given data of one specific modality as a query, aims to search for relevant data in other modalities. Recently, cross-modal hashing has attracted much attention due to its high efficiency and low storage cost. Its main idea is to approximate cross-modality similarity via binary codes. This kind of method works well when the cross-modal data are fully observed. In real-world applications, however, this assumption rarely holds: part of the information is unobserved in some modality. Such partial multimodal data lack pairwise information, which degrades the performance of cross-modal hashing. In this paper, we propose a novel unsupervised cross-modal hashing approach named Unsupervised Deep Imputed Hashing (UDIH). It adopts a two-stage learning strategy. First, the unobserved pairwise data are imputed by the proposed generators. Then a neural network with a weighted triplet loss is applied to a correlation graph, constructed with the aid of the augmented data, to learn the hash codes in Hamming space for each modality. UDIH is able to preserve the semantic consistency and difference among data objects. Extensive experimental results show that the proposed method outperforms state-of-the-art methods on two benchmark datasets (MIRFlickr and NUS-WIDE). The source code is available at https://github.com/AkChen/UDIH
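To make the two-stage strategy concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation (see the GitHub repository above for that). The Generator and HashNet modules, layer sizes, margin, and the uniform triplet weights are all illustrative assumptions; only the overall structure follows the abstract: impute the missing modality with a generator, then train modality-specific hash networks with a weighted triplet loss over a correlation graph, and binarize by sign.

import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, TXT_DIM, CODE_BITS = 512, 300, 32   # illustrative feature sizes

class Generator(nn.Module):
    """Stage 1: imputes the missing modality (here: text from image).
    In practice it would be trained on the fully observed pairs."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)

class HashNet(nn.Module):
    """Stage 2: maps a feature to a relaxed code in (-1, 1)^CODE_BITS."""
    def __init__(self, in_dim, bits=CODE_BITS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, bits), nn.Tanh())

    def forward(self, x):
        return self.net(x)

def weighted_triplet_loss(anchor, pos, neg, w, margin=2.0):
    """Triplet loss with a per-triplet weight w; one plausible choice is
    to take w from the correlation-graph edge strength, down-weighting
    triplets that involve imputed rather than observed data."""
    d_pos = (anchor - pos).pow(2).sum(dim=1)   # squared distance to positive
    d_neg = (anchor - neg).pow(2).sum(dim=1)   # squared distance to negative
    return (w * F.relu(d_pos - d_neg + margin)).mean()

# Stage 1: complete partial pairs by imputing the unobserved text features.
img2txt = Generator(IMG_DIM, TXT_DIM)
imgs_missing_text = torch.randn(16, IMG_DIM)          # toy stand-in data
imputed_text = img2txt(imgs_missing_text)             # augmented data

# Stage 2: train modality-specific hash networks on triplets drawn from
# the correlation graph (neighbours as positives, non-neighbours as negatives).
img_net, txt_net = HashNet(IMG_DIM), HashNet(TXT_DIM)
anchor = img_net(torch.randn(16, IMG_DIM))
pos = txt_net(torch.randn(16, TXT_DIM))
neg = txt_net(torch.randn(16, TXT_DIM))
weights = torch.ones(16)                              # placeholder weights
loss = weighted_triplet_loss(anchor, pos, neg, weights)
loss.backward()

# Retrieval uses binary codes obtained by thresholding the relaxed codes.
codes = torch.sign(img_net(torch.randn(4, IMG_DIM)))  # codes in {-1, +1}

The sign thresholding at the end is the standard relaxation trick in deep hashing: the network is trained on continuous tanh outputs so gradients flow, and the binary Hamming-space codes are recovered only at retrieval time.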
Pages: 8
Related Papers
50 records in total
  • [31] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [32] Cross-Modal Hashing Retrieval Based on Deep Residual Network
    Li, Zhiyi
    Xu, Xiaomian
    Zhang, Du
    Zhang, Peng
    [J]. COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2021, 36 (02): : 383 - 405
  • [33] A triple fusion model for cross-modal deep hashing retrieval
    Wang, Hufei
    Zhao, Kaiqiang
    Zhao, Dexin
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (01) : 347 - 359
  • [34] Deep semantic hashing with dual attention for cross-modal retrieval
    Wu, Jiagao
    Weng, Weiwei
    Fu, Junxia
    Liu, Linfeng
    Hu, Bin
    [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (07) : 5397 - 5416
  • [35] Discriminative deep asymmetric supervised hashing for cross-modal retrieval
    Qiang, Haopeng
    Wan, Yuan
    Liu, Ziyi
    Xiang, Lun
    Meng, Xiaojing
    [J]. KNOWLEDGE-BASED SYSTEMS, 2020, 204
  • [36] Deep semantic similarity adversarial hashing for cross-modal retrieval
    Qiang, Haopeng
    Wan, Yuan
    Xiang, Lun
    Meng, Xiaojing
    [J]. NEUROCOMPUTING, 2020, 400 : 24 - 33
  • [37] A novel deep translated attention hashing for cross-modal retrieval
    Yu, Haibo
    Ma, Ran
    Su, Min
    An, Ping
    Li, Kai
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 : 26443 - 26461