Supervised Hierarchical Deep Hashing for Cross-Modal Retrieval

Cited by: 37
Authors
Zhan, Yu-Wei [1 ]
Luo, Xin [1 ]
Wang, Yongxin [1 ]
Xu, Xin-Shun [1 ]
Affiliations
[1] Shandong Univ, Sch Software, Jinan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Cross-modal retrieval; learning to hash; hierarchy;
DOI
10.1145/3394171.3413962
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modal hashing has attracted much attention in the large-scale multimedia search area. In many real applications, the labels of samples have a hierarchical structure that also contains much useful information for learning. However, most existing methods are designed for non-hierarchically labeled data and thus fail to exploit the rich information of the label hierarchy. In this paper, we propose an effective cross-modal hashing method, named Supervised Hierarchical Deep Cross-modal Hashing (SHDCH for short), which learns hash codes by explicitly delving into the hierarchical labels. Specifically, both the similarity at each layer of the label hierarchy and the relatedness across different layers are implanted into the hash-code learning. In addition, an iterative optimization algorithm is proposed to learn the discrete hash codes directly instead of relaxing the binary constraints. We conducted extensive experiments on two real-world datasets, and the results show the superior performance of SHDCH over several state-of-the-art methods.
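To illustrate the two ideas the abstract mentions, the toy sketch below (a) builds a per-layer similarity matrix from a two-layer label hierarchy and fuses the layers, and (b) keeps the hash codes strictly binary through sign updates rather than relaxing the constraints. The labels, the equal fusion weights, and the fixed-point sign update are all assumptions for illustration only; they are not the paper's actual SHDCH objective or optimizer.

```python
import numpy as np

# Hypothetical two-layer label hierarchy for 4 samples (NOT from the paper):
# a coarse label (layer 1) and a fine label (layer 2) per sample.
coarse = np.array([0, 0, 1, 1])
fine = np.array([0, 1, 2, 3])

def layer_similarity(labels):
    """+1 if two samples share the label at this layer, else -1."""
    return np.where(labels[:, None] == labels[None, :], 1.0, -1.0)

# Fuse per-layer similarities; equal weights are an arbitrary choice here.
S = 0.5 * layer_similarity(coarse) + 0.5 * layer_similarity(fine)

# Discrete hash-code learning sketch: codes stay in {-1, +1} at every step
# via a simple sign-update iteration (no continuous relaxation).
rng = np.random.default_rng(0)
k = 8  # hash-code length in bits
B = np.sign(rng.standard_normal((4, k)))
for _ in range(10):
    B = np.sign(S @ B + 1e-9)  # tiny epsilon avoids sign(0) = 0

# Similarity implied by the learned binary codes, scaled to [-1, 1].
sim = (B @ B.T) / k
```

Samples that agree only at the coarse layer get fused similarity 0, while samples in different coarse groups get -1, so the fused matrix already encodes "how deep in the hierarchy two samples agree" before any code learning happens.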
Pages: 3386-3394
Number of Pages: 9
Related Papers
50 records in total
  • [42] Deep semantic hashing with dual attention for cross-modal retrieval
    Wu, Jiagao
    Weng, Weiwei
    Fu, Junxia
    Liu, Linfeng
    Hu, Bin
[J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (07): 5397 - 5416
  • [43] Deep Label Feature Fusion Hashing for Cross-Modal Retrieval
    Ren, Dongxiao
    Xu, Weihua
    Wang, Zhonghua
    Sun, Qinxiu
    [J]. IEEE ACCESS, 2022, 10 : 100276 - 100285
  • [44] Deep Visual-Semantic Hashing for Cross-Modal Retrieval
    Cao, Yue
    Long, Mingsheng
    Wang, Jianmin
    Yang, Qiang
    Yu, Philip S.
[J]. KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016: 1445 - 1454
  • [45] Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval
    Yang, Erkun
    Deng, Cheng
    Liu, Wei
    Liu, Xianglong
    Tao, Dacheng
    Gao, Xinbo
[J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 1618 - 1625
  • [46] Cross-modal retrieval based on deep regularized hashing constraints
    Khan, Asad
    Hayat, Sakander
    Ahmad, Muhammad
    Wen, Jinyu
    Farooq, Muhammad Umar
    Fang, Meie
    Jiang, Wenchao
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (09) : 6508 - 6530
  • [47] A novel deep translated attention hashing for cross-modal retrieval
    Yu, Haibo
    Ma, Ran
    Su, Min
    An, Ping
    Li, Kai
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (18) : 26443 - 26461
  • [48] Hashing for Cross-Modal Similarity Retrieval
    Liu, Yao
    Yuan, Yanhong
    Huang, Qiaoli
    Huang, Zhixing
    [J]. 2015 11TH INTERNATIONAL CONFERENCE ON SEMANTICS, KNOWLEDGE AND GRIDS (SKG), 2015, : 1 - 8
  • [49] Online deep hashing for both uni-modal and cross-modal retrieval
    Xie, Yicai
    Zeng, Xianhua
    Wang, Tinghua
    Yi, Yun
    [J]. INFORMATION SCIENCES, 2022, 608 : 1480 - 1502