Mind the Gap: Learning Modality-Agnostic Representations With a Cross-Modality UNet

Cited by: 0
Authors
Niu, Xin [1 ,2 ]
Li, Enyi [1 ,2 ]
Liu, Jinchao [1 ,2 ]
Wang, Yan [3 ]
Osadchy, Margarita [4 ]
Fang, Yongchun [1 ,2 ]
Affiliations
[1] Nankai Univ, Engn Res Ctr Trusted Behav Intelligence, Tianjin Key Lab Intelligent Robot, Minist Educ, Tianjin 300350, Peoples R China
[2] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[3] VisionMetric Ltd, Canterbury CT2 7FG, Kent, England
[4] Univ Haifa, Dept Comp Sci, IL-3498838 Haifa, Israel
Funding
National Natural Science Foundation of China;
Keywords
Representation learning; deep learning; cross-modality UNet; heterogeneous face recognition; vibrational spectrum matching; person re-identification; HETEROGENEOUS FACE RECOGNITION;
DOI
10.1109/TIP.2023.3348656
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modality recognition has many important applications in science, law enforcement and entertainment. Popular methods to bridge the modality gap include reducing the distributional differences between representations of different modalities, learning indistinguishable representations, or explicit modality transfer. The first two approaches suffer from a loss of discriminant information while removing modality-specific variations. The third relies heavily on successful modality transfer and can face a catastrophic performance drop when explicit modality transfer is difficult or impossible. To tackle this problem, we propose a compact encoder-decoder neural module (cmUNet) that learns modality-agnostic representations while retaining identity-related information. This is achieved through cross-modality transformation and in-modality reconstruction, enhanced by an adversarial/perceptual loss that encourages indistinguishability of the representations in the original sample space. For cross-modality matching, we propose MarrNet, in which cmUNet is connected to a standard feature extraction network that takes the modality-agnostic representations as input and outputs similarity scores for matching. We validated our method on five challenging tasks, namely Raman-infrared spectrum matching, cross-modality person re-identification, and heterogeneous (photo-sketch, visible-near-infrared and visible-thermal) face recognition, where MarrNet showed superior performance compared to state-of-the-art methods. Furthermore, we observe that a cross-modality matching method can be biased toward extracting discriminant information from partial or even wrong regions when it cannot handle the modality gap, which in turn leads to poor generalization. We show that robustness to occlusions can serve as an indicator of whether a method bridges the modality gap well; to our knowledge, this has been largely neglected in previous work. Our experiments demonstrate that MarrNet exhibits excellent robustness against disguises and occlusions and outperforms existing methods by a large margin (>10%). The proposed cmUNet is a meta-approach and can be used as a building block for various applications.
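The abstract describes cmUNet as a compact encoder-decoder trained with cross-modality transformation and in-modality reconstruction objectives, and MarrNet as that module feeding a standard feature extraction network that outputs similarity scores. The snippet below is a minimal PyTorch sketch of such a pipeline; the module definitions, layer sizes, paired-sample assumption and loss combination are illustrative assumptions rather than the authors' implementation, and the adversarial/perceptual loss is omitted.

```python
# Minimal sketch of the cmUNet/MarrNet idea (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallUNet(nn.Module):
    """Compact encoder-decoder ("cmUNet"-style): maps an input image to a
    modality-agnostic representation in the original sample space."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(base * 2, in_ch, 3, padding=1)  # applied after the skip connection

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return self.out(torch.cat([d1, e1], dim=1))

class Matcher(nn.Module):
    """Standard feature extractor on top of the modality-agnostic
    representations ("MarrNet"-style); returns a cosine similarity score."""
    def __init__(self, in_ch=1, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))

    def forward(self, a, b):
        fa = F.normalize(self.backbone(a), dim=1)
        fb = F.normalize(self.backbone(b), dim=1)
        return (fa * fb).sum(dim=1)

unet, matcher = SmallUNet(), Matcher()
x_a = torch.randn(4, 1, 64, 64)  # modality A (e.g. visible photo), assumed paired
x_b = torch.randn(4, 1, 64, 64)  # modality B (e.g. sketch / NIR) of the same identities

r_a, r_b = unet(x_a), unet(x_b)
loss_recon = F.l1_loss(r_b, x_b)                # in-modality reconstruction keeps identity content
loss_cross = F.l1_loss(r_a, x_b)                # cross-modality transformation toward modality B
loss_match = (1.0 - matcher(r_a, r_b)).mean()   # pull matched pairs together in feature space
# (The adversarial/perceptual loss described in the abstract is omitted here.)
loss = loss_recon + loss_cross + loss_match
loss.backward()
```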
Pages: 655-670
Number of pages: 16
Related Papers
50 records in total
  • [1] Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation
    Konwer, Aishik
    Hu, Xiaoling
    Bae, Joseph
    Xu, Xuan
    Chen, Chao
    Prasanna, Prateek
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 21358 - 21368
  • [2] MARS: Learning Modality-Agnostic Representation for Scalable Cross-Media Retrieval
    Wang, Yunbo
    Peng, Yuxin
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4765 - 4777
  • [3] Modality-Agnostic Topology Aware Localization
    Zanjani, Farhad G.
    Karmanov, Ilia
    Ackermann, Hanno
    Dijkman, Daniel
    Merlin, Simone
    Welling, Max
    Porikli, Fatih
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Cross-Modal Federated Human Activity Recognition via Modality-Agnostic and Modality-Specific Representation Learning
    Yang, Xiaoshan
    Xiong, Baochen
    Huang, Yi
    Xu, Changsheng
[J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3063 - 3071
  • [5] LXMERT: Learning Cross-Modality Encoder Representations from Transformers
    Tan, Hao
    Bansal, Mohit
    [J]. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5100 - 5111
  • [6] Modality-Agnostic Debiasing for Single Domain Generalization
    Qu, Sanqing
    Pan, Yingwei
    Chen, Guang
    Yao, Ting
    Jiang, Changjun
    Mei, Tao
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 24142 - 24151
  • [7] Modality-Agnostic Learning for Radar-Lidar Fusion in Vehicle Detection
    Li, Yu-Jhe
    Park, Jinhyung
    O'Toole, Matthew
    Kitani, Kris
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 908 - 917
  • [8] Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning
    Tran, Thi-Dung
    Ho, Ngoc-Huynh
    Pant, Sudarshan
    Yang, Hyung-Jeong
    Kim, Soo-Hyung
    Lee, Gueesang
    [J]. IEEE ACCESS, 2023, 11 : 56634 - 56648
  • [9] Learning Cross-Modality Representations From Multi-Modal Images
    van Tulder, Gijs
    de Bruijne, Marleen
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2019, 38 (02) : 638 - 648
  • [10] Representation Learning for Cross-Modality Classification
    van Tulder, Gijs
    de Bruijne, Marleen
    [J]. MEDICAL COMPUTER VISION AND BAYESIAN AND GRAPHICAL MODELS FOR BIOMEDICAL IMAGING, 2017, 10081 : 126 - 136