Unbiased Feature Learning with Causal Intervention for Visible-Infrared Person Re-Identification

Cited by: 0
Authors
Yuan, Bowen [1]
Lu, Jiahao [1]
You, Sisi [1]
Bao, Bing-kun [1]
Affiliations
[1] Nanjing University of Posts and Telecommunications, Nanjing, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Visible-infrared person re-identification; cross-modality; causal inference; backdoor adjustment
DOI
10.1145/3674737
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Visible-infrared person re-identification (VI-ReID) aims to match individuals across the visible and infrared modalities. Existing methods can learn class-separable features but still struggle with intra-class modality gaps caused by modality-specific information, which is discriminative in one modality but absent in the other (e.g., a black striped shirt, whose stripe pattern appears in visible images but not in infrared ones). This interfering information creates a spurious correlation with the class label, which hinders alignment across modalities. To this end, we propose an Unbiased feature learning method based on Causal inTervention (UCT) for VI-ReID, built on three contributions. First, through the proposed structural causal graph, we show that modality-specific information acts as a confounder that hinders intra-class feature alignment. Second, we propose a causal intervention method that removes the confounder through an effective approximation of backdoor adjustment, which adjusts for the spurious correlation between features and labels. Third, we incorporate the proposed approximation into a baseline VI-ReID model: the confounder is removed by adjusting the extracted features with a weighted set of pre-trained class prototypes from both modalities, where the weights are adapted to the input features. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method outperforms state-of-the-art methods. Code is available at https://github.com/NJUPT-MCC/UCT.
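The intervention step described in the abstract can be made concrete. In causal terms, backdoor adjustment replaces the biased conditional P(y | x) with P(y | do(x)) = Σ_z P(y | x, z) P(z), marginalizing over the confounder z (here, the modality-specific information). Below is a minimal, hypothetical PyTorch sketch of one plausible reading of the described approximation: extracted features are adjusted by a feature-adaptive weighted sum of frozen, pre-trained class prototypes from both modalities. The module name (PrototypeAdjuster), the similarity-softmax weighting, and all shapes are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeAdjuster(nn.Module):
    """Hypothetical sketch of the backdoor-adjustment approximation:
    shift each feature by a feature-adaptive, weighted combination of
    pre-trained class prototypes from the visible and infrared modalities."""

    def __init__(self, prototypes: torch.Tensor, temperature: float = 0.05):
        # prototypes: (2 * num_classes, feat_dim), one prototype per class
        # and per modality, pre-trained and kept frozen here.
        super().__init__()
        self.register_buffer("prototypes", prototypes)
        self.temperature = temperature

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) extracted by the ReID backbone.
        # Weight each prototype by its similarity to the input feature,
        # approximating the expectation over the confounder z in
        # P(y | do(x)) = sum_z P(y | x, z) P(z).
        sim = F.normalize(feats, dim=1) @ F.normalize(self.prototypes, dim=1).T
        weights = F.softmax(sim / self.temperature, dim=1)  # (batch, 2C)
        adjustment = weights @ self.prototypes              # (batch, feat_dim)
        return feats + adjustment                           # adjusted features

# Usage sketch: 2 modalities x 395 training identities (SYSU-MM01),
# 2048-d backbone features.
protos = torch.randn(2 * 395, 2048)
adjuster = PrototypeAdjuster(protos)
adjusted = adjuster(torch.randn(8, 2048))  # -> (8, 2048)
```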
Pages: 20
Related Papers
50 records in total (entries [31]-[40] shown)
  • [31] Adaptive Middle Modality Alignment Learning for Visible-Infrared Person Re-identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024: 2176 - 2196
  • [32] Style-Agnostic Representation Learning for Visible-Infrared Person Re-Identification
    Wu, Jianbing
    Liu, Hong
    Shi, Wei
    Liu, Mengyuan
    Li, Wenhao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 2263 - 2275
  • [33] Towards a Unified Middle Modality Learning for Visible-Infrared Person Re-Identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 788 - 796
  • [34] Visible-infrared person re-identification via specific and shared representations learning
    Zheng, Aihua
    Liu, Juncong
    Wang, Zi
    Huang, Lili
    Li, Chenglong
    Yin, Bing
    VISUAL INTELLIGENCE, 1 (1)
  • [35] Modality-agnostic learning for robust visible-infrared person re-identification
    Gong, Shengrong
    Li, Shuomin
    Xie, Gengsheng
    Yao, Yufeng
    Zhong, Shan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (03)
  • [36] Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification
    Zhang, Yiyuan
    Kang, Yuhao
    Zhao, Sanyuan
    Shen, Jianbing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1554 - 1565
  • [37] Learning Modality-Specific Representations for Visible-Infrared Person Re-Identification
    Feng, Zhanxiang
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 579 - 590
  • [38] Cross-modality consistency learning for visible-infrared person re-identification
    Shao, Jie
    Tang, Lei
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [39] Multi-Stage Auxiliary Learning for Visible-Infrared Person Re-Identification
    Zhang, Huadong
    Cheng, Shuli
    Du, Anyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (11) : 12032 - 12047
  • [40] Attention-enhanced feature mapping network for visible-infrared person re-identification
    Liu, Shuaiyi
    Han, Ke
    MACHINE VISION AND APPLICATIONS, 2025, 36 (02)