Learning Coupled Feature Spaces for Cross-modal Matching

Cited by: 171
Authors
Wang, Kaiye [1 ]
He, Ran [1 ]
Wang, Wei [1 ]
Wang, Liang [1 ]
Tan, Tieniu [1 ]
Affiliations
[1] Chinese Acad Sci, Ctr Res Intelligent Percept & Comp, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
Keywords
FACE RECOGNITION; REGRESSION
DOI
10.1109/ICCV.2013.261
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modal matching has recently drawn much attention due to the widespread existence of multimodal data. It aims to match data from different modalities, and generally involves two basic problems: measuring relevance and coupled feature selection. Most previous works focus mainly on the first problem. In this paper, we propose a novel coupled linear regression framework that deals with both. Our method learns two projection matrices to map multimodal data into a common feature space, in which cross-modal data matching can be performed. In the learning procedure, ℓ2,1-norm penalties are imposed on the two projection matrices separately, which selects relevant and discriminative features from the coupled feature spaces simultaneously. A trace norm is further imposed on the projected data as a low-rank constraint, which enhances the relevance of connected data across modalities. We also present an iterative algorithm based on half-quadratic minimization to solve the proposed regularized linear regression problem. Experimental results on two challenging cross-modal datasets demonstrate that the proposed method outperforms state-of-the-art approaches.
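As a rough illustration of the scheme the abstract describes, the sketch below pairs an iteratively reweighted least-squares solver for an ℓ2,1-regularized regression (one common half-quadratic-style strategy for this penalty) with cosine-similarity matching in the learned common space. This is a simplified sketch, not the authors' implementation: it omits the trace-norm low-rank term, and the function names (`l21_regression`, `match`) and parameter defaults are illustrative assumptions.

```python
import numpy as np

def l21_regression(X, T, lam=0.1, n_iter=50, eps=1e-8):
    """Solve min_W ||X W - T||_F^2 + lam * ||W||_{2,1} by iteratively
    reweighted least squares (a half-quadratic-style scheme).
    The row-wise l2,1 penalty drives whole rows of W toward zero,
    which is what performs feature selection."""
    d = X.shape[1]
    D = np.eye(d)  # reweighting matrix, refreshed every iteration
    for _ in range(n_iter):
        # Closed-form update for W with the current reweighting D.
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ T)
        # Recompute row weights; eps guards against division by zero.
        row_norms = np.linalg.norm(W, axis=1) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return W

def match(X_a, X_b, W_a, W_b):
    """Project both modalities into the common space and score every
    (query, candidate) pair by cosine similarity."""
    A = X_a @ W_a
    B = X_b @ W_b
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T  # similarity matrix: rows = queries, cols = candidates
```

In use, one would fit `W_a` and `W_b` against a shared target matrix `T` (e.g. class-label indicators) and then rank candidates of the other modality by each row of the similarity matrix.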
Pages: 2088-2095
Page count: 8
Related Papers
50 records in total
  • [1] Deep Coupled Metric Learning for Cross-Modal Matching
    Liong, Venice Erin; Lu, Jiwen; Tan, Yap-Peng; Zhou, Jie
    IEEE TRANSACTIONS ON MULTIMEDIA, 2017, 19(6): 1234-1244
  • [2] Coupled Dictionary Learning and Feature Mapping for Cross-Modal Retrieval
    Xu, Xing; Shimada, Atsushi; Taniguchi, Rin-ichiro; He, Li
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015
  • [3] Generalized Coupled Dictionary Learning Approach With Applications to Cross-Modal Matching
    Mandal, Devraj; Biswas, Soma
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25(8): 3826-3837
  • [4] Large Margin Coupled Feature Learning for Cross-Modal Face Recognition
    Jin, Yi; Lu, Jiwen; Ruan, Qiuqi
    2015 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), 2015: 286-292
  • [5] Quaternion Representation Learning for Cross-Modal Matching
    Wang, Zheng; Xu, Xing; Wei, Jiwei; Xie, Ning; Shao, Jie; Yang, Yang
    KNOWLEDGE-BASED SYSTEMS, 2023, 270
  • [6] Learning with Noisy Correspondence for Cross-Modal Matching
    Huang, Zhenyu; Niu, Guocheng; Liu, Xiao; Ding, Wenbiao; Xiao, Xinyan; Wu, Hua; Peng, Xi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Cross-Modal Feature Description for Remote Sensing Image Matching
    Li, Liangzhi; Liu, Ming; Ma, Lingfei; Han, Ling
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2022, 112
  • [8] Cross-Modal Image-Text Matching via Coupled Projection Learning Hashing
    Zhao, Huan; Wang, Haoqian; Zha, Xupeng; Wang, Song
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022: 367-376
  • [9] Disentangled Representation Learning for Cross-Modal Biometric Matching
    Ning, Hailong; Zheng, Xiangtao; Lu, Xiaoqiang; Yuan, Yuan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24: 1763-1774
  • [10] Cross-Modal Matching in Monkey
    Ettlinger, G; Blakemore, CB
    NEUROPSYCHOLOGIA, 1967, 5(2): 147-+