Learning enhancing modality-invariant features for visible-infrared person re-identification

Cited: 1
Authors
Zhang, La [1 ]
Zhao, Xu [2 ]
Du, Haohua [3 ]
Sun, Jian [1 ]
Wang, Jinqiao [2 ]
Affiliations
[1] Beijing Inst Technol, Beijing 100081, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
[3] Beihang Univ, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Visible-infrared person re-identification; Cross-modality; Feature learning; Feature distribution; RETRIEVAL; MODEL;
DOI
10.1007/s13042-024-02168-6
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
To solve the task of visible-infrared person re-identification, most existing methods embed all images into a unified feature space through shared parameters and then apply a metric learning loss to learn modality-invariant features. However, these methods face two problems. First, they focus almost exclusively on modality-invariant features, while features unique to each modality, which can enhance feature discriminability, are often overlooked. Second, current metric learning losses mainly target feature discriminability and align the modality distributions only implicitly, so the feature distributions of the two modalities remain inconsistent in the unified feature space. Taking the foregoing into consideration, in this paper we propose a novel end-to-end framework composed of two modules: an intra-modality enhancing module and a modality-invariant module. The former fully exploits modality-specific characteristics by establishing an independent branch for each modality; it improves feature discriminability by further enhancing intra-class compactness and inter-class discrepancy within each modality. The latter introduces a cross-modality feature distribution consistency loss based on a Gaussian distribution assumption; it significantly alleviates the modality discrepancy by directly aligning the feature distributions in the unified feature space. As a result, the proposed framework learns modality-invariant features while enhancing discriminability within each modality. Extensive experiments on SYSU-MM01 and RegDB demonstrate the effectiveness of our method.
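The abstract does not give the exact form of the cross-modality feature distribution consistency loss. A minimal NumPy sketch of the general idea, assuming each modality's embeddings are modeled as a diagonal Gaussian and the loss penalizes the difference of the first two moments (a simplified 2-Wasserstein distance between Gaussians; the function name and diagonal-covariance simplification are illustrative, not from the paper):

```python
import numpy as np

def gaussian_alignment_loss(feat_v, feat_i):
    """Distribution-consistency loss under a Gaussian assumption (sketch).

    Models the visible and infrared embedding distributions as diagonal
    Gaussians and penalizes the squared gap between their means and
    standard deviations, i.e. a simplified 2-Wasserstein distance.

    feat_v, feat_i: (N, D) arrays of visible / infrared embeddings.
    Returns a scalar; 0.0 when the two empirical moments coincide.
    """
    mu_v, mu_i = feat_v.mean(axis=0), feat_i.mean(axis=0)
    var_v, var_i = feat_v.var(axis=0), feat_i.var(axis=0)
    mean_term = np.sum((mu_v - mu_i) ** 2)          # align first moments
    std_term = np.sum((np.sqrt(var_v) - np.sqrt(var_i)) ** 2)  # second moments
    return mean_term + std_term
```

In a training loop this term would be added to the usual identity/metric losses, so the shared embedding space is pulled toward a common distribution for both modalities rather than relying on implicit alignment alone.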
Pages: 19