Set and Rebase: Determining the Semantic Graph Connectivity for Unsupervised Cross-Modal Hashing

Cited by: 0
Authors
Wang, Weiwei [1 ]
Shen, Yuming [2 ]
Zhang, Haofeng [1 ]
Yao, Yazhou [1 ]
Liu, Li [2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Incept Inst Artificial Intelligence IIAI, Abu Dhabi, U Arab Emirates
Funding
National Natural Science Foundation of China;
Keywords
DOI
N/A
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The label-free nature of unsupervised cross-modal hashing prevents models from exploiting the exact semantic similarity of the data. Existing research typically simulates the semantics with a heuristic geometric prior in the original feature space. However, this introduces heavy bias into the model, as the original features do not fully represent the underlying multi-view data relations. To address this problem, we propose a novel unsupervised hashing method called Semantic-Rebased Cross-modal Hashing (SRCH). A novel 'Set-and-Rebase' process initializes and updates the cross-modal similarity graph of the training data: we set the graph according to the intramodal feature geometry and then alternately rebase it, updating its edges according to the hashing results. We develop an alternating optimization routine that rebases the graph and trains the hashing auto-encoders with closed-form solutions, so that the overall framework is trained efficiently. Experimental results on benchmark datasets demonstrate the superiority of our model over state-of-the-art algorithms.
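The abstract's 'Set-and-Rebase' loop can be illustrated with a minimal NumPy sketch: a similarity graph is first *set* from intramodal feature geometry (a k-nearest-neighbor prior), then alternately *rebased* using the current hash codes. All function names, the threshold scheme, and the simple sign-projection "hashing" below are hypothetical stand-ins for the paper's hashing auto-encoders with closed-form solutions, not the authors' actual implementation.

```python
import numpy as np

def set_graph(features, k=3):
    """'Set' step: initialize a similarity graph from intramodal feature
    geometry (cosine similarity, k-nearest-neighbor edges)."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    graph = np.zeros_like(sim)
    for i in range(len(sim)):
        nn = np.argsort(-sim[i])[1:k + 1]  # skip self at position 0
        graph[i, nn] = 1.0
    return graph

def hash_codes(features, proj):
    """Toy hashing: sign of a linear projection (a placeholder for the
    paper's learned hashing functions)."""
    return np.sign(features @ proj)

def rebase_graph(graph, codes, keep=0.5):
    """'Rebase' step: re-score existing edges by the Hamming-style
    similarity of current hash codes and prune the weakest ones."""
    code_sim = (codes @ codes.T) / codes.shape[1]   # in [-1, 1]
    scores = graph * code_sim
    thresh = np.quantile(scores[graph > 0], 1.0 - keep)
    return (scores >= thresh).astype(float) * graph

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))        # 20 samples, 8-dim features
proj = rng.standard_normal((8, 16))     # 16-bit codes

G = set_graph(X, k=3)                   # set: geometric prior
for _ in range(3):                      # alternate: rebase from hash results
    B = hash_codes(X, proj)
    G = rebase_graph(G, B, keep=0.5)
print(G.sum())                          # number of surviving edges
```

In the actual method the hashing model is also re-trained between rebase steps, so the graph and the codes improve jointly; this sketch only fixes the projection to keep the loop readable.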
Pages: 853-859
Page count: 7
Related papers
50 in total
  • [31] Graph Convolutional Network Hashing for Cross-Modal Retrieval
    Xu, Ruiqing
    Li, Chao
    Yan, Junchi
    Deng, Cheng
    Liu, Xianglong
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 982 - 988
  • [32] Local Graph Convolutional Networks for Cross-Modal Hashing
    Chen, Yudong
    Wang, Sen
    Lu, Jianglin
    Chen, Zhi
    Zhang, Zheng
    Huang, Zi
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1921 - 1928
  • [33] Collaborative Subspace Graph Hashing for Cross-modal Retrieval
    Zhang, Xiang
    Dong, Guohua
    Du, Yimo
    Wu, Chengkun
    Luo, Zhigang
    Yang, Canqun
    [J]. ICMR '18: PROCEEDINGS OF THE 2018 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2018, : 213 - 221
  • [34] Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
    Xie, Liang
    Zhu, Lei
    Chen, Guoqi
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (15) : 9185 - 9204
  • [36] Unsupervised cross-modal retrieval via Multi-modal graph regularized Smooth Matrix Factorization Hashing
    Fang, Yixian
    Zhang, Huaxiang
    Ren, Yuwei
    [J]. KNOWLEDGE-BASED SYSTEMS, 2019, 171 : 69 - 80
  • [37] Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval
    Li, Mingyong
    Wang, Hongya
    [J]. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021, : 183 - 191
  • [38] UNSUPERVISED CONTRASTIVE HASHING FOR CROSS-MODAL RETRIEVAL IN REMOTE SENSING
    Mikriukov, Georgii
    Ravanbakhsh, Mahdyar
    Demir, Begum
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4463 - 4467
  • [39] Deep Unsupervised Momentum Contrastive Hashing for Cross-modal Retrieval
    Lu, Kangkang
    Yu, Yanhua
    Liang, Meiyu
    Zhang, Min
    Cao, Xiaowen
    Zhao, Zehua
    Yin, Mengran
    Xue, Zhe
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 126 - 131
  • [40] Unsupervised Deep Imputed Hashing for Partial Cross-modal Retrieval
    Chen, Dong
    Cheng, Miaomiao
    Min, Chen
    Jing, Liping
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020.