Semantic-Driven Interpretable Deep Multi-Modal Hashing for Large-Scale Multimedia Retrieval

Cited by: 25
Authors
Lu, Xu [1 ]
Liu, Li [1 ]
Nie, Liqiang [2 ]
Chang, Xiaojun [3 ]
Zhang, Huaxiang [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Jinan 250358, Peoples R China
[2] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
Funding
National Natural Science Foundation of China
Keywords
Semantics; Task analysis; Data models; Feature extraction; Redundancy; Fuses; Optimization; Multi-modal hashing; large-scale multimedia retrieval; interpretable hashing
DOI
10.1109/TMM.2020.3044473
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Multi-modal hashing focuses on fusing different modalities and exploring the complementarity of heterogeneous multi-modal data for compact hash learning. However, existing multi-modal hashing methods still suffer from several problems: 1) Almost all existing methods generate unexplainable hash codes. They roughly assume that each hash code bit contributes equally to the retrieval results, ignoring the discriminative information embedded in hash learning and the semantic similarity in hash retrieval. Moreover, the hash code length is set empirically, which causes bit redundancy and degrades retrieval accuracy. 2) Most existing methods exploit shallow models, which fail to fully capture the higher-level correlations of multi-modal data. 3) Most existing methods adopt an online hashing strategy based on an immutable direct projection, which generates query codes for new samples without considering the differences of semantic categories. In this paper, we propose a Semantic-driven Interpretable Deep Multi-modal Hashing (SIDMH) method that generates interpretable hash codes driven by semantic categories within a deep hashing architecture and addresses all three problems in an integrated model. The main contributions are: 1) A novel deep multi-modal hashing network is developed to progressively extract hidden representations of heterogeneous modality features and deeply exploit the complementarity of multi-modal data. 2) Interpretable hash codes are learned, with the discriminative information of different categories distinctly embedded into the hash codes and their different impacts on hash retrieval intuitively explained. In addition, the code length depends on the number of categories in the dataset, which reduces bit redundancy and improves retrieval accuracy. 3) A semantic-driven online hashing strategy encodes the significant branches and discards the negligible branches of each query sample according to the semantics it contains, so it can capture the different semantics of dynamic queries. Finally, we consider both the nearest-neighbor similarity and the semantic similarity of hash codes. Experiments on several public multimedia retrieval datasets validate the superiority of the proposed method.
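To ground the architecture sketched in the abstract, the following is a minimal, illustrative PyTorch example of a two-branch multi-modal hashing network whose hash code length equals the number of semantic categories. It is a toy sketch under stated assumptions, not the authors' SIDMH implementation: the class name MultiModalHashNet, the feature dimensions, and the simple learnable weighted fusion are hypothetical, and pre-extracted image and text features are assumed.

```python
# Illustrative sketch only (NOT the authors' SIDMH code): a two-branch deep
# multi-modal hashing network that maps pre-extracted image and text features
# to a relaxed hash code whose length equals the number of semantic categories.
import torch
import torch.nn as nn

class MultiModalHashNet(nn.Module):  # hypothetical name
    def __init__(self, img_dim: int, txt_dim: int, num_categories: int, hidden: int = 512):
        super().__init__()
        # Modality-specific branches progressively extract hidden representations.
        self.img_branch = nn.Sequential(
            nn.Linear(img_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_categories),
        )
        self.txt_branch = nn.Sequential(
            nn.Linear(txt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_categories),
        )
        # Learnable fusion weights stand in for modality complementarity.
        self.fusion = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.fusion, dim=0)
        fused = w[0] * self.img_branch(img_feat) + w[1] * self.txt_branch(txt_feat)
        # tanh yields a relaxed code in (-1, 1); sign(.) gives the final bits,
        # so each bit can be read against one semantic category.
        return torch.tanh(fused)

# Usage with made-up dimensions: 4096-d image features, 1386-d text features, 24 categories.
net = MultiModalHashNet(img_dim=4096, txt_dim=1386, num_categories=24)
codes = torch.sign(net(torch.randn(8, 4096), torch.randn(8, 1386)))  # eight 24-bit query codes
```

The one-bit-per-category layout is only meant to illustrate why tying the code length to the number of categories removes the need to tune the bit count empirically; the paper's actual interpretable coding and semantic-driven online strategy are more involved.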
Pages: 4541-4554
Page count: 14
Related Papers
50 records in total
  • [1] Flexible Online Multi-modal Hashing for Large-scale Multimedia Retrieval
    Lu, Xu
    Zhu, Lei
    Cheng, Zhiyong
    Li, Jingjing
    Nie, Xiushan
    Zhang, Huaxiang
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1129 - 1137
  • [2] Fast Discrete Collaborative Multi-Modal Hashing for Large-Scale Multimedia Retrieval
    Zheng, Chaoqun
    Zhu, Lei
    Lu, Xu
    Li, Jingjing
    Cheng, Zhiyong
    Zhang, Hanwang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2020, 32 (11) : 2171 - 2184
  • [3] CLIP Multi-modal Hashing for Multimedia Retrieval
    Zhu, Jian
    Sheng, Mingkai
    Huang, Zhangmin
    Chang, Jingfei
    Jiang, Jinling
    Long, Jian
    Luo, Cheng
    Liu, Lei
    MULTIMEDIA MODELING, MMM 2025, PT I, 2025, 15520 : 195 - 205
  • [4] Flexible Multi-modal Hashing for Scalable Multimedia Retrieval
    Zhu, Lei
    Lu, Xu
    Cheng, Zhiyong
    Li, Jingjing
    Zhang, Huaxiang
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2020, 11 (02)
  • [5] Multi-Modal Hashing for Efficient Multimedia Retrieval: A Survey
    Zhu, Lei
    Zheng, Chaoqun
    Guan, Weili
    Li, Jingjing
    Yang, Yang
    Shen, Heng Tao
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (01) : 239 - 260
  • [6] Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval
    Xie, Liang
    Zhu, Lei
    Chen, Guoqi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75 (15) : 9185 - 9204
  • [7] Deep Semantic Adversarial Hashing Based on Autoencoder for Large-Scale Cross-Modal Retrieval
    Li, Mingyong
    Wang, Hongya
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2020,
  • [8] Graph Convolutional Multi-modal Hashing for Flexible Multimedia Retrieval
    Lu, Xu
    Zhu, Lei
    Liu, Li
    Nie, Liqiang
    Zhang, Huaxiang
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1414 - 1422
  • [9] Fast Semantic Preserving Hashing for Large-Scale Cross-Modal Retrieval
    Wang, Xingzhi
    Liu, Xin
    Peng, Shujuan
    Cheung, Yiu-ming
    Hu, Zhikai
    Wang, Nannan
    2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, : 1348 - 1353