Learning to Hash on Partial Multi-Modal Data

Cited by: 0
Authors
Wang, Qifan [1 ]
Si, Luo [1 ]
Shen, Bin [1 ]
Affiliations
[1] Purdue Univ, Dept Comp Sci, W Lafayette, IN 47907 USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hashing approaches have become popular for fast similarity search in many large-scale applications. Real-world data often have multiple modalities, or different representations obtained from multiple sources. Various hashing methods have been proposed to generate compact binary codes from such multi-modal data. However, most existing multi-modal hashing techniques assume that every data example appears in all modalities, or at least that one modality contains all examples. In real applications, each modality often has some data missing, which results in many partial examples, i.e., examples with one or more modalities missing. In this paper, we present a novel hashing approach for Partial Multi-Modal data. In particular, the hash codes are learned by simultaneously enforcing data consistency among different modalities via latent subspace learning and preserving data similarity within each modality through a graph Laplacian. We then further improve the codes via an orthogonal rotation, exploiting the orthogonal-invariance property of our formulation. Experiments on two multi-modal datasets demonstrate the superior performance of the proposed approach over several state-of-the-art multi-modal hashing methods.
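The abstract describes a two-stage pipeline: learn a shared latent representation across possibly incomplete modalities, then refine the binary codes with an orthogonal rotation that leaves the formulation's objective unchanged. The sketch below is only a rough illustration of that data flow, not the authors' method: it replaces the paper's joint latent-subspace learning and graph-Laplacian similarity term with independent per-modality PCA embeddings, averages the latent vectors over whichever modalities each example actually has, and then applies an ITQ-style alternating optimization of an orthogonal rotation before taking signs. The function and variable names (learn_partial_multimodal_hash, obs1, obs2, n_bits) are illustrative assumptions.

```python
import numpy as np

def learn_partial_multimodal_hash(X1, X2, obs1, obs2, n_bits=16, n_iters=20, seed=0):
    """Toy sketch for hashing partial multi-modal data.

    X1, X2    : feature matrices holding only the observed rows of each modality
                (each with at least n_bits feature dimensions)
    obs1, obs2: boolean masks of length n marking which of the n examples
                have that modality; every example must have at least one
    Returns binary codes in {-1, +1} of shape (n, n_bits).
    """
    rng = np.random.default_rng(seed)
    n = obs1.shape[0]
    V = np.zeros((n, n_bits))
    counts = np.zeros(n)

    # Embed each modality separately with its top principal directions.
    # (The paper instead learns one shared latent subspace jointly, with a
    #  graph-Laplacian term preserving within-modality similarity; PCA is a
    #  stand-in here for brevity.)
    for X, obs in ((X1, obs1), (X2, obs2)):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V[obs] += Xc @ Vt[:n_bits].T
        counts[obs] += 1

    V /= counts[:, None]  # average over the modalities each example has

    # ITQ-style refinement: alternate between binarizing and solving an
    # orthogonal Procrustes problem for the rotation R. Rotating V does not
    # change pairwise distances, which is the orthogonal-invariance idea.
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iters):
        B = np.sign(V @ R)
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    return np.sign(V @ R)
```

For instance, with n = 100 examples where obs1 marks the 60 that have an image view and obs2 the 70 that have a text view (their union covering all 100), the function returns a 100 x n_bits sign matrix that can be packed into compact binary codes for Hamming-distance search.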
Pages: 3904 - 3910
Number of pages: 7
Related Papers
50 entries in total
  • [41] Multi-modal data fusion: A description
    Coppock, S
    Mazlack, LJ
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2004, 3214 : 1136 - 1142
  • [42] Longitudinal and Multi-Modal Data Learning for Parkinson's Disease Diagnosis
    Huang, Zhongwei
    Lei, Haijun
    Zhao, Yujia
    Zhou, Feng
    Yan, Jin
    Elazab, Ahmed
    Lei, Baiying
    2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018), 2018, : 1411 - 1414
  • [43] A multi-modal heterogeneous data mining algorithm using federated learning
    Wei, Xianyong
    Journal of Engineering, 2021, 2021 (08): 458 - 466
  • [44] Unequal adaptive visual recognition by learning from multi-modal data
    Cai, Ziyun
    Zhang, Tengfei
    Jing, Xiao-Yuan
    Shao, Ling
    INFORMATION SCIENCES, 2022, 600 : 1 - 21
  • [45] Special issue on multi-modal information learning and analytics on big data
    Ma, Xiaomeng
    Sun, Yan
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (05): 3299 - 3300
  • [46] Suppressing simulation bias in multi-modal data using transfer learning
    Kustowski, Bogdan
    Gaffney, Jim A.
    Spears, Brian K.
    Anderson, Gemma J.
    Anirudh, Rushil
    Bremer, Peer-Timo
    Thiagarajan, Jayaraman J.
    Kruse, Michael K. G.
    Nora, Ryan C.
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2022, 3 (01):
  • [47] Interpretable multi-modal data integration
    Daniel Osorio
    Nature Computational Science, 2022, 2 : 8 - 9
  • [48] RetrievalMMT: Retrieval-Constrained Multi-Modal Prompt Learning for Multi-Modal Machine Translation
    Wang, Yan
    Zeng, Yawen
    Liang, Junjie
    Xing, Xiaofen
    Xu, Jin
    Xu, Xiangmin
    PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024, 2024, : 860 - 868
  • [49] Deep Multi-Instance Learning Using Multi-Modal Data for Diagnosis of Lymphocytosis
    Sahasrabudhe, Mihir
    Sujobert, Pierre
    Zacharaki, Evangelia I.
    Maurin, Eugenie
    Grange, Beatrice
    Jallades, Laurent
    Paragios, Nikos
    Vakalopoulou, Maria
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2021, 25 (06) : 2125 - 2136
  • [50] Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach
    Rudovic, Ognjen
    Zhang, Meiru
    Schuller, Bjorn
    Picard, Rosalind W.
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 6 - 15