Mind the Gap: Learning Modality-Agnostic Representations With a Cross-Modality UNet

Cited by: 0
Authors
Niu, Xin [1 ,2 ]
Li, Enyi [1 ,2 ]
Liu, Jinchao [1 ,2 ]
Wang, Yan [3 ]
Osadchy, Margarita [4 ]
Fang, Yongchun [1 ,2 ]
Affiliations
[1] Nankai Univ, Engn Res Ctr Trusted Behav Intelligence, Tianjin Key Lab Intelligent Robot, Minist Educ, Tianjin 300350, Peoples R China
[2] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[3] VisionMetric Ltd, Canterbury CT2 7FG, Kent, England
[4] Univ Haifa, Dept Comp Sci, IL-3498838 Haifa, Israel
Funding
National Natural Science Foundation of China
Keywords
Representation learning; deep learning; cross-modality UNet; heterogeneous face recognition; vibrational spectrum matching; person re-identification; HETEROGENEOUS FACE RECOGNITION;
DOI
10.1109/TIP.2023.3348656
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modality recognition has many important applications in science, law enforcement, and entertainment. Popular methods for bridging the modality gap include reducing the distributional differences between the representations of different modalities, learning indistinguishable representations, and explicit modality transfer. The first two approaches lose discriminant information while removing modality-specific variations. The third relies heavily on successful modality transfer and can suffer a catastrophic performance drop when explicit modality transfer is difficult or impossible. To tackle this problem, we propose a compact encoder-decoder neural module (cmUNet) that learns modality-agnostic representations while retaining identity-related information. This is achieved through cross-modality transformation and in-modality reconstruction, enhanced by an adversarial/perceptual loss that encourages the representations to be indistinguishable in the original sample space. For cross-modality matching, we propose MarrNet, in which cmUNet is connected to a standard feature extraction network that takes the modality-agnostic representations as input and outputs similarity scores for matching. We validated our method on five challenging tasks, namely Raman-infrared spectrum matching, cross-modality person re-identification, and heterogeneous (photo-sketch, visible-near-infrared, and visible-thermal) face recognition, where MarrNet showed superior performance compared to state-of-the-art methods. Furthermore, we observed that a cross-modality matching method can be biased toward extracting discriminant information from partial or even wrong regions when it fails to handle the modality gap, which in turn leads to poor generalization. We show that robustness to occlusions can serve as an indicator of whether a method bridges the modality gap well; to our knowledge, this has been largely neglected in previous work. Our experiments demonstrate that MarrNet is highly robust to disguises and occlusions and outperforms existing methods by a large margin (>10%). The proposed cmUNet is a meta-approach and can serve as a building block for various applications.
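To make the architecture described in the abstract concrete, the sketch below wires a small UNet-style encoder-decoder to a generic embedding backbone and composes the three training signals the abstract names. This is a minimal PyTorch reading of the design, not the authors' implementation: the class names, network depth and channel widths, the per-modality reconstruction heads (decoder_a/decoder_b), the modality discriminator (disc), and the use of L1 losses and cosine-similarity scoring are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """Two 3x3 conv + ReLU layers, as in a standard UNet stage."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class CMUNet(nn.Module):
    """Compact encoder-decoder that maps an input from either modality to an
    image-sized, modality-agnostic representation (depth/widths are guesses)."""
    def __init__(self, in_ch: int = 3, base: int = 32):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)
        self.enc2 = ConvBlock(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.mid = ConvBlock(2 * base, 4 * base)
        self.up2 = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec2 = ConvBlock(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.dec1 = ConvBlock(2 * base, base)
        self.head = nn.Conv2d(base, in_ch, 1)  # back to the sample space

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)


class MarrNet(nn.Module):
    """cmUNet followed by a feature extractor; matching is scored on the
    embeddings of the modality-agnostic representations."""
    def __init__(self, cmunet: nn.Module, backbone: nn.Module):
        super().__init__()
        self.cmunet, self.backbone = cmunet, backbone

    def forward(self, x_a, x_b):
        f_a = self.backbone(self.cmunet(x_a))
        f_b = self.backbone(self.cmunet(x_b))
        return F.cosine_similarity(f_a, f_b, dim=1)  # one score per pair


def cmunet_losses(unet, decoder_a, decoder_b, disc, x_a, x_b):
    """One possible reading of the objective for a paired batch (x_a, x_b):
    the same identities observed in modalities A and B. The reconstruction
    heads decoder_a/decoder_b and the modality discriminator disc are
    auxiliary modules assumed here for illustration."""
    r_a, r_b = unet(x_a), unet(x_b)
    # Cross-modality transformation: paired representations should agree.
    loss_xmod = F.l1_loss(r_a, r_b)
    # In-modality reconstruction: identity content must survive in r.
    loss_recon = F.l1_loss(decoder_a(r_a), x_a) + F.l1_loss(decoder_b(r_b), x_b)
    # Adversarial term: push the discriminator toward chance (soft 0.5 target)
    # so the representations are modality-indistinguishable in sample space.
    d_a, d_b = disc(r_a), disc(r_b)
    loss_adv = (F.binary_cross_entropy_with_logits(d_a, torch.full_like(d_a, 0.5))
                + F.binary_cross_entropy_with_logits(d_b, torch.full_like(d_b, 0.5)))
    return loss_xmod + loss_recon + loss_adv


if __name__ == "__main__":
    unet = CMUNet()
    backbone = nn.Sequential(  # stand-in for a real trunk such as a ResNet
        nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
    model = MarrNet(unet, backbone)
    x_a, x_b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
    print(model(x_a, x_b).shape)  # torch.Size([2]): similarity per pair
```

In this reading, the reconstruction heads matter: without them, a representation that collapses to a constant would trivially satisfy both the agreement and adversarial terms while discarding all identity information, which is exactly the loss of discriminant information the abstract attributes to naive gap-closing methods.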
Pages: 655-670
Number of pages: 16