Learning to Hash on Partial Multi-Modal Data

Cited by: 0
Authors
Wang, Qifan [1 ]
Si, Luo [1 ]
Shen, Bin [1 ]
Affiliations
[1] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords: (none listed)
DOI: not available
CLC Classification Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Hashing approaches have become popular for fast similarity search in many large-scale applications. Real-world data usually have multiple modalities or different representations from multiple sources, and various hashing methods have been proposed to generate compact binary codes from such multi-modal data. However, most existing multi-modal hashing techniques assume that each data example appears in all modalities, or that at least one modality contains all data examples. In real applications, every modality often suffers from missing data, which results in many partial examples, i.e., examples with some modalities missing. In this paper, we present a novel hashing approach for partial multi-modal data. In particular, the hashing codes are learned by simultaneously ensuring data consistency among different modalities via latent subspace learning and preserving data similarity within each modality through a graph Laplacian. We then further improve the codes via an orthogonal rotation, exploiting the orthogonal invariance of our formulation. Experiments on two multi-modal datasets demonstrate the superior performance of the proposed approach over several state-of-the-art multi-modal hashing methods.
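The abstract names three ingredients: latent-subspace consistency across modalities, graph-Laplacian similarity preservation within each modality, and an orthogonal-rotation refinement of the binary codes. The Python sketch below is only an illustrative approximation of that pipeline under stated assumptions, not the paper's actual algorithm: the random modalities X, the observation masks obs, the gradient-based update of the shared latent matrix Y, and the ITQ-style rotation step are all stand-ins chosen for brevity.

```python
# Illustrative sketch only (all names and the simplified updates are assumptions,
# not the paper's exact formulation). Two partial modalities are generated at
# random; a shared latent matrix Y is learned by (1) reconstructing each observed
# modality from Y (latent-subspace consistency) and (2) smoothing Y over a kNN
# graph built within each modality (graph Laplacian); (3) an ITQ-style orthogonal
# rotation then reduces quantization loss before binarization.
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k, lam, lr = 200, 20, 30, 16, 1.0, 1e-3
X = [rng.standard_normal((n, d1)), rng.standard_normal((n, d2))]  # two modalities
obs = [rng.random(n) < 0.8, rng.random(n) < 0.8]                  # partial-data masks


def knn_laplacian(Z, idx, nn=5):
    """Unnormalized kNN-graph Laplacian over the rows in idx, embedded into n x n."""
    D = np.linalg.norm(Z[idx][:, None] - Z[idx][None, :], axis=-1)
    W = np.zeros_like(D)
    for i in range(len(idx)):
        j = np.argsort(D[i])[1:nn + 1]          # nearest neighbours, excluding self
        W[i, j] = W[j, i] = 1.0
    L = np.zeros((n, n))
    L[np.ix_(idx, idx)] = np.diag(W.sum(1)) - W
    return L


L = sum(knn_laplacian(Xm, np.flatnonzero(m)) for Xm, m in zip(X, obs))

Y = 0.1 * rng.standard_normal((n, k))           # shared latent representation
for _ in range(200):
    # Closed-form projections P_m from the latent space to each observed modality.
    P = [np.linalg.lstsq(Y[m], Xm[m], rcond=None)[0] for Xm, m in zip(X, obs)]
    grad = lam * (L @ Y)                        # Laplacian smoothness term
    for Xm, m, Pm in zip(X, obs, P):
        grad[m] += (Y[m] @ Pm - Xm[m]) @ Pm.T   # reconstruction term (observed rows)
    Y -= lr * grad

# Orthogonal rotation: choose R to minimize the quantization loss ||B - Y R||_F.
# Rotating Y by an orthogonal R leaves both terms of the objective above unchanged,
# so the rotation can be picked purely to make the codes easier to binarize.
R = np.linalg.qr(rng.standard_normal((k, k)))[0]
for _ in range(30):
    B = np.sign(Y @ R)                          # current binary codes
    U, _, Vt = np.linalg.svd(Y.T @ B)           # orthogonal Procrustes solution
    R = U @ Vt
codes = (Y @ R > 0).astype(np.uint8)            # final k-bit hash codes
print(codes.shape)
```

The rotation step mirrors the observation stated in the abstract: because the formulation is orthogonally invariant, an orthogonal transform of the learned codes can be selected solely to improve quantization without affecting the learned consistency or similarity structure.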
Pages: 3904-3910
Number of pages: 7
Related Papers (50 in total)
  • [1] Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
    Bashiri, Fereshteh S.
    Baghaie, Ahmadreza
    Rostami, Reihaneh
    Yu, Zeyun
    D'Souza, Roshan M.
    JOURNAL OF IMAGING, 2019, 5 (01)
  • [2] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [3] Learning Shared and Specific Factors for Multi-modal Data
    Yin, Qiyue
    Huang, Yan
    Wu, Shu
    Wang, Liang
    COMPUTER VISION, PT II, 2017, 772 : 89 - 98
  • [4] LEARNING UNIFIED SPARSE REPRESENTATIONS FOR MULTI-MODAL DATA
    Wang, Kaiye
    Wang, Wei
    Wang, Liang
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 3545 - 3549
  • [5] Multi-modal Contrastive Learning for Healthcare Data Analytics
    Li, Rui
    Gao, Jing
    2022 IEEE 10TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2022), 2022, : 120 - 127
  • [6] Multi-modal learning and its application for biomedical data
    Liu, Jin
    Zhang, Yu-Dong
    Cai, Hongming
    FRONTIERS IN MEDICINE, 2024, 10
  • [7] Learning multi-modal dictionaries: Application to audiovisual data
    Monaci, Gianluca
    Jost, Philippe
    Vandergheynst, Pierre
    Mailhe, Boris
    Lesage, Sylvain
    Gribonval, Remi
    MULTIMEDIA CONTENT REPRESENTATION, CLASSIFICATION AND SECURITY, 2006, 4105 : 538 - 545
  • [8] Learning Concept Taxonomies from Multi-modal Data
    Zhang, Hao
    Hu, Zhiting
    Deng, Yuntian
Sachan, Mrinmaya
    Yan, Zhicheng
    Xing, Eric P.
    PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2016, : 1791 - 1801
  • [9] Partial Modal Conditioned GANs for Multi-modal Multi-label Learning with Arbitrary Modal-Missing
    Zhang, Yi
    Shen, Jundong
    Zhang, Zhecheng
    Wang, Chongjun
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2021), PT II, 2021, 12682 : 413 - 428
  • [10] Multi-kernel Partial Least Squares for Multi-Modal Data Analysis
    Wang, Ping
    Zhang, Hong
    PROCEEDINGS OF THE 2016 7TH INTERNATIONAL CONFERENCE ON EDUCATION, MANAGEMENT, COMPUTER AND MEDICINE (EMCM 2016), 2017, 59 : 931 - 935