Deep semantics-preserving cross-modal hashing

Cited: 0
Authors
Lai, Zhihui [1 ,2 ]
Fang, Xiaomei [1 ,2 ]
Kong, Heng [3 ]
Affiliations
[1] Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen,518060, China
[2] Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen,518060, China
[3] Department of Thyroid and Breast Surgery, BaoAn Central Hospital of Shenzhen, Shenzhen,518102, China
Abstract
Cross-modal hashing has received widespread attention in recent years due to its outstanding performance in cross-modal data retrieval. Cross-modal hashing can be decomposed into two steps, i.e., feature learning and binarization. However, most existing cross-modal hashing methods do not take the supervisory information of the data into consideration during binary quantization, and thus often fail to adequately preserve semantic information. To solve these problems, this paper proposes a novel deep cross-modal hashing method called deep semantics-preserving cross-modal hashing (DSCMH), which makes full use of intra- and inter-modal semantic information to improve the model's performance. Moreover, by designing a label network for semantic alignment during the binarization process, DSCMH's performance can be further improved. To verify the performance of the proposed method, extensive experiments were conducted on four large datasets. The results show that the proposed method outperforms most existing cross-modal hashing methods. In addition, an ablation experiment shows that the proposed regularization terms all have positive effects on the model's performance in cross-modal retrieval. The code of this paper can be downloaded from http://www.scholat.com/laizhihui. © 2024 Elsevier Inc.
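The two-step decomposition the abstract describes (feature learning, then binarization, with retrieval by Hamming distance) can be sketched in a toy form. Everything below is a hypothetical illustration, not the paper's DSCMH model: the hand-written feature vectors stand in for the outputs of learned image and text encoders in a shared space, and binarization is reduced to a plain sign function.

```python
# Toy sketch of the generic cross-modal hashing pipeline:
#   step 1 (feature learning): encoders map each modality into a shared
#           real-valued space -- faked here with hand-written vectors;
#   step 2 (binarization): quantize shared-space features to hash codes.
# This is NOT the DSCMH model from the paper, only the generic scheme.

def binarize(features):
    """Step 2: quantize shared-space features to a binary code (sign)."""
    return tuple(1 if x >= 0 else 0 for x in features)

def hamming(a, b):
    """Retrieval metric: number of differing bits between two codes."""
    return sum(x != y for x, y in zip(a, b))

# Pretend encoder outputs: an image, its matching caption, and an
# unrelated text, all embedded in the same 8-dimensional space.
img_feat  = [0.9, -0.2, 0.4, 0.1, -0.7, 0.3, 0.5, -0.1]
txt_feat  = [0.8, -0.1, 0.5, 0.0, -0.6, 0.4, 0.4, -0.2]   # paired caption
unrelated = [-0.9, 0.8, -0.5, 0.7, 0.6, -0.4, -0.5, 0.9]

code_img, code_txt, code_neg = map(binarize, (img_feat, txt_feat, unrelated))

# Cross-modal retrieval: the paired caption is far closer in Hamming space.
print(hamming(code_img, code_txt), hamming(code_img, code_neg))  # 0 7
```

The sign-based quantization above is exactly the step where, per the abstract, most existing methods discard supervisory information; DSCMH's contribution is to inject label semantics into that step.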
DOI
10.1016/j.bdr.2024.100494
Related papers (50 total)
  • [1] Self-supervised deep semantics-preserving Hashing for cross-modal retrieval
    Lu, Bo
    Duan, Xiaodong
    Yuan, Ye
    [J]. Qinghua Daxue Xuebao/Journal of Tsinghua University, 2022, 62 (09): 1442 - 1449
  • [2] Global and local semantics-preserving based deep hashing for cross-modal retrieval
    Ma, Lei
    Li, Hongliang
    Meng, Fanman
    Wu, Qingbo
    Ngan, King Ngi
    [J]. NEUROCOMPUTING, 2018, 312 : 49 - 62
  • [3] Discriminative latent semantics-preserving similarity embedding hashing for cross-modal retrieval
    Chen, Yongfeng
    Tan, Junpeng
    Yang, Zhijing
    Cheng, Yongqiang
    Chen, Ruihan
    [J]. NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (18) : 10655 - 10680
  • [4] Semantics-preserving hashing based on multi-scale fusion for cross-modal retrieval
    Hong Zhang
    Min Pan
    [J]. Multimedia Tools and Applications, 2021, 80 : 17299 - 17314
  • [5] Semantics-preserving hashing based on multi-scale fusion for cross-modal retrieval
    Zhang, Hong
    Pan, Min
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (11) : 17299 - 17314
  • [6] Multi-label semantics preserving based deep cross-modal hashing
    Zou, Xitao
    Wang, Xinzhi
    Bakker, Erwin M.
    Wu, Song
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2021, 93
  • [7] Semantics-Preserving Hashing for Cross-View Retrieval
    Lin, Zijia
    Ding, Guiguang
    Hu, Mingqing
    Wang, Jianmin
    [J]. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 3864 - 3872
  • [8] Deep Consistency Preserving Network for Unsupervised Cross-Modal Hashing
    Li, Mengluan
    Guo, Yanqing
    Fu, Haiyan
    Li, Yi
    Su, Hong
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT I, 2024, 14425 : 235 - 246
  • [9] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [10] Unsupervised Deep Relative Neighbor Relationship Preserving Cross-Modal Hashing
    Yang, Xiaohan
    Wang, Zhen
    Wu, Nannan
    Li, Guokun
    Feng, Chuang
    Liu, Pingping
    [J]. MATHEMATICS, 2022, 10 (15)