Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval

Cited by: 37
Authors:
Xie, Liang [1]
Zhu, Lei [2]
Chen, Guoqi [3]
Affiliations:
[1] Wuhan University of Technology, Department of Mathematics, Wuhan, China
[2] Singapore Management University, School of Information Systems, Singapore
[3] Wuhan University of Technology, School of Automation, Wuhan, China
Keywords:
Cross-modal hashing; Multi-graph learning; Cross-media retrieval; Image
DOI: 10.1007/s11042-016-3432-0
CLC classification: TP [Automation Technology; Computer Technology]
Subject classification code: 0812
Abstract
With the advance of internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly in demand for multimedia retrieval. In cross-modal hashing, three essential problems must be considered. First, an effective cross-modal relationship should be learned from training data with scarce label information. Second, appropriate weights should be assigned to different modalities to reflect their importance. Third, the training process must scale, an issue usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which addresses all three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and of the hash codes, respectively, so that a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and hash functions. Experiments on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.
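The abstract names two computational ingredients: Nyström approximation for scalable graph construction and a modality-weighted combination of per-modality graphs. Below is a minimal Python sketch of how such ingredients can fit together; the RBF kernel choice, the fixed modality weights, and the helper names `nystrom_affinity` and `combined_graph_times` are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal, hypothetical sketch of Nystrom-approximated affinity graphs per
# modality and their weighted combination. Not the authors' implementation.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel


def nystrom_affinity(X, n_landmarks=500, gamma=None, seed=0):
    """Return factors (C, W_pinv) such that the full n x n RBF affinity
    matrix is approximated by C @ W_pinv @ C.T (Nystrom method)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(n_landmarks, X.shape[0]), replace=False)
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma=gamma)          # n x m cross-affinities
    W = rbf_kernel(landmarks, landmarks, gamma=gamma)  # m x m landmark affinities
    W_pinv = np.linalg.pinv(W)                         # pseudo-inverse for stability
    return C, W_pinv                                   # K ~= C @ W_pinv @ C.T, never materialized


def combined_graph_times(v, factored_graphs, weights):
    """Multiply the weighted sum of approximated graphs by a vector v
    without forming any n x n matrix: sum_m a_m * C_m W_m^+ C_m^T v."""
    out = np.zeros_like(v, dtype=float)
    for (C, W_pinv), a in zip(factored_graphs, weights):
        out += a * (C @ (W_pinv @ (C.T @ v)))
    return out


# Toy usage with two "modalities" (e.g., image and text features).
n = 2000
X_img = np.random.randn(n, 128)
X_txt = np.random.randn(n, 64)
graphs = [nystrom_affinity(X_img, n_landmarks=200),
          nystrom_affinity(X_txt, n_landmarks=200)]
alpha = np.array([0.6, 0.4])   # modality weights; learned in the paper, fixed here
y = combined_graph_times(np.random.randn(n), graphs, alpha)
print(y.shape)                 # (2000,)
```

Because the full n x n affinity matrices are never materialized, memory stays at O(nm) per modality, which is the kind of scalability benefit the abstract associates with the Nyström approach.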
Pages: 9185-9204
Number of pages: 20
Related papers (50 records)
  • [1] Xie, Liang; Zhu, Lei; Chen, Guoqi. Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval. Multimedia Tools and Applications, 2016, 75: 9185-9204.
  • [2] Li, Mingyong; Wang, Hongya. Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval. Proceedings of the 2021 International Conference on Multimedia Retrieval (ICMR '21), 2021: 183-191.
  • [3] Cheng, Miaomiao; Jing, Liping; Ng, Michael K. Robust Unsupervised Cross-modal Hashing for Multimedia Retrieval. ACM Transactions on Information Systems, 2020, 38(3).
  • [4] Yu, Jun; Wu, Xiao-Jun; Zhang, Donglin. Unsupervised Multi-modal Hashing for Cross-Modal Retrieval. Cognitive Computation, 2022, 14(3): 1159-1171.
  • [5] Su, Shupeng; Zhong, Zhisheng; Zhang, Chao. Deep Joint-Semantics Reconstructing Hashing for Large-Scale Unsupervised Cross-Modal Retrieval. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 3027-3035.
  • [6] Li, Mingyong; Li, Yewen; Ge, Mingyuan; Ma, Longfei. CLIP-based fusion-modal reconstructing hashing for large-scale unsupervised cross-modal retrieval. International Journal of Multimedia Information Retrieval, 2023, 12(1).
  • [7] Shen, Zhanjian; Zhai, Deming; Liu, Xianming; Jiang, Junjun. Semi-Supervised Graph Convolutional Hashing Network for Large-Scale Cross-Modal Retrieval. 2020 IEEE International Conference on Image Processing (ICIP), 2020: 2366-2370.
  • [8] Meng, Hui; Zhang, Huaxiang; Liu, Li; Liu, Dongmei; Lu, Xu; Guo, Xinru. Joint-Modal Graph Convolutional Hashing for unsupervised cross-modal retrieval. Neurocomputing, 2024, 595.