Hashing for Cross-Modal Similarity Retrieval

Cited by: 5
Authors
Liu, Yao [1]
Yuan, Yanhong [1]
Huang, Qiaoli [1]
Huang, Zhixing [1]
Institutions
[1] Southwest Univ, Sch Comp & Informat Sci, Semant Grid Lab, Chongqing 400715, Peoples R China
Keywords
DOI
10.1109/SKG.2015.9
Chinese Library Classification
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
Cross-modal similarity retrieval over multimedia data containing texts and images has attracted increasing attention from researchers. The central difficulty of cross-modal retrieval is how to effectively construct correlations between heterogeneous multi-modal data. Following canonical correlation analysis (CCA), most existing cross-modal methods embed the heterogeneous data into a joint abstraction space by linear projections, but recognition accuracy remains an open problem. To address this challenge, we propose an adaptive boosting method with weighted CCA and hashing for cross-modal similarity retrieval. Hashing is used to speed up retrieval, while CCA connects the image and text data through their original features; sample weights serve as the input parameters of the AdaBoost algorithm, which adjusts them to reduce the CCA mapping error rate. First, each text is represented as a sample from a hidden topic model learned with latent Dirichlet allocation, and each image is represented as a bag of visual (SIFT and GIST) features. Second, unified hash codes are generated in the high-level abstraction space by hashing methods such as spectral hashing, kernelized locality semantic hashing, and iterative quantization. Third, correlations between the two components are learned with weighted canonical correlation analysis; AdaBoost iterates this process to find the best result. Finally, the nearest neighbors are retrieved. Extensive experiments on two different datasets highlight the advantage of our method.
Pages: 1 - 8
Page count: 8
Related Papers
50 records total
  • [1] Deep Hashing Similarity Learning for Cross-Modal Retrieval
    Ma, Ying
    Wang, Meng
    Lu, Guangyun
    Sun, Yajun
    [J]. IEEE ACCESS, 2024, 12 : 8609 - 8618
  • [2] Deep semantic similarity adversarial hashing for cross-modal retrieval
    Qiang, Haopeng
    Wan, Yuan
    Xiang, Lun
    Meng, Xiaojing
    [J]. NEUROCOMPUTING, 2020, 400 : 24 - 33
  • [3] Two-Stage Asymmetric Similarity Preserving Hashing for Cross-Modal Retrieval
    Huang, Junfan
    Kang, Peipei
    Han, Na
    Chen, Yonghao
    Fang, Xiaozhao
    Gao, Hongbo
    Zhou, Guoxu
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (01) : 429 - 444
  • [4] Gaussian similarity preserving for cross-modal hashing
    Lin, Liuyin
    Shu, Xin
    [J]. NEUROCOMPUTING, 2022, 494 : 446 - 454
  • [5] Semantic consistency hashing for cross-modal retrieval
    Yao, Tao
    Kong, Xiangwei
    Fu, Haiyan
    Tian, Qi
    [J]. NEUROCOMPUTING, 2016, 193 : 250 - 259
  • [6] Fast Unmediated Hashing for Cross-Modal Retrieval
    Nie, Xiushan
    Liu, Xingbo
    Xi, Xiaoming
    Li, Chenglong
    Yin, Yilong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (09) : 3669 - 3678
  • [7] Efficient Discriminative Hashing for Cross-Modal Retrieval
    Huang, Junfan
    Kang, Peipei
    Fang, Xiaozhao
    Han, Na
    Xie, Shengli
    Gao, Hongbo
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2024, 54 (06): : 3865 - 3878
  • [8] Hierarchical Consensus Hashing for Cross-Modal Retrieval
    Sun, Yuan
    Ren, Zhenwen
    Hu, Peng
    Peng, Dezhong
    Wang, Xu
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 824 - 836
  • [9] Random Online Hashing for Cross-Modal Retrieval
    Jiang, Kaihang
    Wong, Wai Keung
    Fang, Xiaozhao
    Li, Jiaxing
    Qin, Jianyang
    Xie, Shengli
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, : 1 - 15
  • [10] GrowBit: Incremental Hashing for Cross-Modal Retrieval
    Mandal, Devraj
    Annadani, Yashas
    Biswas, Soma
    [J]. COMPUTER VISION - ACCV 2018, PT IV, 2019, 11364 : 305 - 321