Semantic-Driven Interpretable Deep Multi-Modal Hashing for Large-Scale Multimedia Retrieval

Cited by: 25
|
Authors
Lu, Xu [1 ]
Liu, Li [1 ]
Nie, Liqiang [2 ]
Chang, Xiaojun [3 ]
Zhang, Huaxiang [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Jinan 250358, Peoples R China
[2] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
Funding
National Natural Science Foundation of China;
关键词
Semantics; Task analysis; Data models; Feature extraction; Redundancy; Fuses; Optimization; Multi-modal hashing; large-scale multimedia; retrieval; interpretable hashing;
DOI
10.1109/TMM.2020.3044473
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Multi-modal hashing focuses on fusing different modalities and exploiting the complementarity of heterogeneous multi-modal data for compact hash learning. However, existing multi-modal hashing methods still suffer from several problems: 1) Almost all existing methods generate uninterpretable hash codes. They roughly assume that every hash bit contributes equally to the retrieval results, ignoring the discriminative information embedded in hash learning and the semantic similarity in hash retrieval. Moreover, the hash code length is set empirically, which causes bit redundancy and hurts retrieval accuracy. 2) Most existing methods exploit shallow models that fail to fully capture the higher-level correlations of multi-modal data. 3) Most existing methods adopt an online hashing strategy based on an immutable direct projection, which generates query codes for new samples without considering the differences among semantic categories. In this paper, we propose a Semantic-driven Interpretable Deep Multi-modal Hashing (SIDMH) method that generates interpretable hash codes driven by semantic categories within a deep hashing architecture, solving all three problems in one integrated model. The main contributions are: 1) A novel deep multi-modal hashing network that progressively extracts hidden representations of heterogeneous modality features and deeply exploits the complementarity of multi-modal data. 2) Interpretable hash codes, with the discriminative information of different categories distinctively embedded into the hash codes and their different impacts on hash retrieval intuitively explained. Moreover, the code length depends on the number of categories in the dataset, which reduces bit redundancy and improves retrieval accuracy. 3) A semantic-driven online hashing strategy that encodes the significant branches and discards the negligible branches of each query sample according to the semantics it contains, and can therefore capture the different semantics of dynamic queries. Finally, we consider both the nearest-neighbor similarity and the semantic similarity of hash codes. Experiments on several public multimedia retrieval datasets validate the superiority of the proposed method.
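The abstract's key ideas can be illustrated with a minimal sketch: the hash code is partitioned into one segment per semantic category (so total code length scales with the number of categories), and a query keeps only the segments for categories it is predicted to contain, discarding the negligible branches. All names, the segment width, and the thresholding scheme below are illustrative assumptions, not the authors' actual SIDMH formulation.

```python
# Hypothetical sketch of category-driven hash encoding (NOT the paper's method):
# the code is split into one segment per semantic category, and segments for
# low-scoring ("negligible") categories are zeroed out, mimicking the
# branch-discarding strategy the abstract describes.

NUM_CATEGORIES = 4      # dataset-dependent; total code length scales with it
BITS_PER_CATEGORY = 8   # illustrative segment width (an assumption)

def encode_query(category_scores, segment_codes, threshold=0.5):
    """Assemble a query hash code from per-category +/-1 segments.

    category_scores : predicted relevance of each category for the query
    segment_codes   : one list of +/-1 bits per category
    """
    code = []
    for score, segment in zip(category_scores, segment_codes):
        if score >= threshold:
            code.extend(segment)             # keep significant branch
        else:
            code.extend([0] * len(segment))  # discard negligible branch
    return code

# Toy +/-1 segments, one per category (purely illustrative).
segments = [[1 if (i + j) % 2 == 0 else -1 for j in range(BITS_PER_CATEGORY)]
            for i in range(NUM_CATEGORIES)]
scores = [0.9, 0.1, 0.7, 0.2]   # query predicted to contain categories 0 and 2

code = encode_query(scores, segments)
print(len(code))  # 32 = NUM_CATEGORIES * BITS_PER_CATEGORY
```

Under this toy scheme, the code length is fixed by the category count rather than set empirically, and two queries sharing semantic categories share active segments, which is one way the "interpretable" and "semantic-driven" properties in the abstract could manifest.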
Pages: 4541-4554
Number of pages: 14
Related Papers
50 records in total
  • [21] Semantic-Driven Multimedia Retrieval with the MPEG Query Format
    Tous, Ruben
    Delgado, Jaime
    SEMANTIC MULTIMEDIA, PROCEEDINGS, 2008, 5392 : 149 - 163
  • [22] Semantic-driven multimedia retrieval with the MPEG Query Format
    Ruben Tous
    Jaime Delgado
    Multimedia Tools and Applications, 2010, 49 : 213 - 233
  • [23] Flexible Dual Multi-Modal Hashing for Incomplete Multi-Modal Retrieval
    Wei, Yuhong
    An, Junfeng
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2024,
  • [24] Semantic-driven multimedia retrieval with the MPEG Query Format
    Tous, Ruben
    Delgado, Jaime
    MULTIMEDIA TOOLS AND APPLICATIONS, 2010, 49 (01) : 213 - 233
  • [25] One for more: Structured Multi-Modal Hashing for multiple multimedia retrieval tasks
    Zheng, Chaoqun
    Li, Fengling
    Zhu, Lei
    Zhang, Zheng
    Lu, Wenpeng
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 233
  • [26] Semantic-guided autoencoder adversarial hashing for large-scale cross-modal retrieval
    Mingyong Li
    Qiqi Li
    Yan Ma
    Degang Yang
    Complex & Intelligent Systems, 2022, 8 : 1603 - 1617
  • [27] Semantic-guided autoencoder adversarial hashing for large-scale cross-modal retrieval
    Li, Mingyong
    Li, Qiqi
    Ma, Yan
    Yang, Degang
    COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (02) : 1603 - 1617
  • [28] Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation
    Niu, Yulei
    Lu, Zhiwu
    Wen, Ji-Rong
    Xiang, Tao
    Chang, Shih-Fu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (04) : 1720 - 1731
  • [29] Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval
    Yang, Erkun
    Liu, Mingxia
    Yao, Dongren
    Cao, Bing
    Lian, Chunfeng
    Yap, Pew-Thian
    Shen, Dinggang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (02) : 503 - 513
  • [30] Deep Multi-Scale Attention Hashing Network for Large-Scale Image Retrieval
    Feng H.
    Wang N.
    Tang J.
    Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science), 2022, 50 (04): : 35 - 45