Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval

Cited by: 5
Authors
Chen, Dong [1 ]
Cheng, Miaomiao [1 ]
Min, Chen [1 ]
Jing, Liping [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
cross-modal retrieval; partial multimodal data; cross-modal hashing; imputation; unsupervised learning;
DOI
10.1109/ijcnn48605.2020.9206611
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Cross-modal retrieval aims to find, given a query from one modality, the relevant data in other modalities. Recently, cross-modal hashing has attracted much attention due to its high efficiency and low storage cost. Its main idea is to approximate cross-modality similarity via binary codes. This kind of method works well when the cross-modal data is completely observed. However, real-world applications often violate this assumption: part of the information is unobserved in some modalities. Such partial multimodal data lacks pairwise information and thus degrades the performance of cross-modal hashing. In this paper, we propose a novel unsupervised cross-modal hashing approach named Unsupervised Deep Imputed Hashing (UDIH), which adopts a two-stage learning strategy. First, the unobserved pairwise data is imputed by the proposed generators. Then a neural network with a weighted triplet loss is applied to a correlation graph, constructed with the aid of the augmented data, to learn the hash code in Hamming space for each modality. UDIH preserves both the semantic consistency and the differences among data objects. Extensive experiments show that the proposed method outperforms state-of-the-art methods on two benchmark datasets (MIRFlickr and NUS-WIDE). The source code is available at https://github.com/AkChen/UDIH
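As a rough, non-authoritative illustration of the second stage described in the abstract, the Python sketch below shows a weighted triplet loss on relaxed (continuous) hash codes and the sign-based binarization used for Hamming-space retrieval. All names here (weighted_triplet_loss, binarize, margin) are hypothetical and do not come from the UDIH repository; the per-triplet weight is assumed to be an edge weight taken from the correlation graph.

# A minimal sketch (assumed, not the authors' UDIH implementation) of a
# weighted triplet loss for cross-modal hashing, using PyTorch.
import torch
import torch.nn.functional as F

def weighted_triplet_loss(anchor, positive, negative, weight, margin=1.0):
    # anchor/positive/negative: (batch, code_len) relaxed hash codes,
    # possibly produced by different modality networks. weight: per-triplet
    # weight of shape (batch,), e.g. an edge weight from the correlation graph.
    d_pos = F.pairwise_distance(anchor, positive)  # pull matched pairs together
    d_neg = F.pairwise_distance(anchor, negative)  # push unmatched pairs apart
    return (weight * F.relu(d_pos - d_neg + margin)).mean()

def binarize(codes):
    # Map relaxed codes to {-1, +1}; Hamming distance between binary codes
    # then approximates the learned cross-modal similarity.
    return torch.sign(codes)

At retrieval time, a query from one modality would be encoded, binarized, and ranked against the binarized codes of the other modality by Hamming distance.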
Pages: 8
Related Papers
50 records in total
  • [41] Deep supervised fused similarity hashing for cross-modal retrieval
    Ng, Wing W. Y.
    Xu, Yongzhi
    Tian, Xing
    Wang, Hui
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (39) : 86537 - 86555
  • [42] Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval
    Yang, Erkun
    Deng, Cheng
    Liu, Wei
    Liu, Xianglong
    Tao, Dacheng
    Gao, Xinbo
    [J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1618 - 1625
  • [43] Deep Visual-Semantic Hashing for Cross-Modal Retrieval
    Cao, Yue
    Long, Mingsheng
    Wang, Jianmin
    Yang, Qiang
    Yu, Philip S.
    [J]. KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016, : 1445 - 1454
  • [44] Deep semantic hashing with dual attention for cross-modal retrieval
    Wu, Jiagao
    Weng, Weiwei
    Fu, Junxia
    Liu, Linfeng
    Hu, Bin
    [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (07): : 5397 - 5416
  • [45] Deep Label Feature Fusion Hashing for Cross-Modal Retrieval
    Ren, Dongxiao
    Xu, Weihua
    Wang, Zhonghua
    Sun, Qinxiu
    [J]. IEEE ACCESS, 2022, 10 : 100276 - 100285
  • [46] A novel deep translated attention hashing for cross-modal retrieval
    Yu, Haibo
    Ma, Ran
    Su, Min
    An, Ping
    Li, Kai
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (18) : 26443 - 26461
  • [47] Cross-modal retrieval based on deep regularized hashing constraints
    Khan, Asad
    Hayat, Sakander
    Ahmad, Muhammad
    Wen, Jinyu
    Farooq, Muhammad Umar
    Fang, Meie
    Jiang, Wenchao
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (09) : 6508 - 6530
  • [48] Hashing for Cross-Modal Similarity Retrieval
    Liu, Yao
    Yuan, Yanhong
    Huang, Qiaoli
    Huang, Zhixing
    [J]. 2015 11TH INTERNATIONAL CONFERENCE ON SEMANTICS, KNOWLEDGE AND GRIDS (SKG), 2015, : 1 - 8
  • [49] Online deep hashing for both uni-modal and cross-modal retrieval
    Xie, Yicai
    Zeng, Xianhua
    Wang, Tinghua
    Yi, Yun
    [J]. INFORMATION SCIENCES, 2022, 608 : 1480 - 1502