Unbiased Feature Learning with Causal Intervention for Visible-Infrared Person Re-Identification

Cited: 0
Authors
Yuan, Bowen [1 ]
Lu, Jiahao [1 ]
You, Sisi [1 ]
Bao, Bing-kun [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visible-infrared person re-identification; cross modality; causal inference; backdoor adjustment;
DOI
10.1145/3674737
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Visible-infrared person re-identification (VI-ReID) aims to match individuals across different modalities. Existing methods can learn class-separable features but still struggle with intra-class modality gaps caused by modality-specific information, which is discriminative in one modality but absent in the other (e.g., a black striped shirt). This interfering information creates a spurious correlation with the class label, which hinders cross-modality alignment. To address this, we propose an Unbiased feature learning method based on Causal inTervention for VI-ReID from three aspects. First, through the proposed structural causal graph, we demonstrate that modality-specific information acts as a confounder that restricts intra-class feature alignment. Second, we propose a causal intervention method that removes the confounder via an effective approximation of backdoor adjustment, which adjusts for the spurious correlation between features and labels. Third, we incorporate the proposed approximation into a basic VI-ReID model: the confounder is removed by adjusting the extracted features with a set of weighted pre-trained class prototypes from different modalities, where the weights are adapted to the input features. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method outperforms state-of-the-art methods. Code is available at https://github.com/NJUPT-MCC/UCT.
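To make the mechanism in the abstract concrete, here is a minimal, hypothetical sketch of the backdoor-adjustment approximation it describes: an extracted feature is combined with a feature-dependent weighted sum of frozen class prototypes from both modalities, mimicking the sum over confounder values in P(Y|do(X)) = Σ_z P(Y|X,z)P(z). This is not the authors' released implementation (see the linked UCT repository for that); the class name CausalFeatureAdjustment, the softmax weighting, and the residual combination are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalFeatureAdjustment(nn.Module):
    """Feature-dependent weighting of frozen class prototypes: a rough
    stand-in for the backdoor-adjustment approximation described in the
    abstract (module name and design choices are assumptions)."""

    def __init__(self, prototypes: torch.Tensor):
        # prototypes: (K, dim) pre-trained class prototypes, e.g. class-mean
        # features collected separately for the visible and infrared modalities.
        super().__init__()
        self.register_buffer("prototypes", F.normalize(prototypes, dim=1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, dim) features from the VI-ReID backbone.
        # Feature-adapted weights over the confounder set, playing the role of
        # the P(z) terms in the backdoor sum P(Y|do(X)) = sum_z P(Y|X, z) P(z).
        weights = F.softmax(feat @ self.prototypes.t(), dim=1)   # (batch, K)
        context = weights @ self.prototypes                      # (batch, dim)
        # A residual combination of the biased feature and the prototype
        # context is one plausible way to "adjust" the feature; the paper's
        # exact formulation may differ.
        return F.normalize(feat + context, dim=1)

# Usage sketch with random stand-ins for features and prototypes.
if __name__ == "__main__":
    dim, num_classes = 2048, 395                 # 395 training IDs in SYSU-MM01
    protos = torch.randn(num_classes * 2, dim)   # visible + infrared prototypes
    adjust = CausalFeatureAdjustment(protos)
    feats = torch.randn(8, dim)
    print(adjust(feats).shape)                   # torch.Size([8, 2048])
```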
Pages: 20
Related Papers
50 records in total
  • [1] Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification
    Li, Xulin
    Lu, Yan
    Liu, Bin
    Liu, Yating
    Yin, Guojun
    Chu, Qi
    Huang, Jinyang
    Zhu, Feng
    Zhao, Rui
    Yu, Nenghai
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 381 - 398
  • [2] Stronger Heterogeneous Feature Learning for Visible-Infrared Person Re-Identification
    Wang, Hao
    Bi, Xiaojun
    Yu, Changdong
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [3] Visible-Infrared Person Re-Identification Via Feature Constrained Learning
    Zhang, Jing
    Chen, Guangfeng
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (12)
  • [4] Progressive Discriminative Feature Learning for Visible-Infrared Person Re-Identification
    Zhou, Feng
    Cheng, Zhuxuan
    Yang, Haitao
    Song, Yifeng
    Fu, Shengpeng
    ELECTRONICS, 2024, 13 (14)
  • [5] Multi-dimensional feature learning for visible-infrared person re-identification
    Yang, Zhenzhen
    Wu, Xinyi
    Yang, Yongpeng
    BIG DATA RESEARCH, 2025, 40
  • [6] Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification
    Feng, Jiawei
    Wu, Ancong
    Zheng, Wei-Shi
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22752 - 22761
  • [7] Learning dual attention enhancement feature for visible-infrared person re-identification
    Zhang, Guoqing
    Zhang, Yinyin
    Zhang, Hongwei
    Chen, Yuhao
    Zheng, Yuhui
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 99
  • [8] Identity Feature Disentanglement for Visible-Infrared Person Re-Identification
    Chen, Xiumei
    Zheng, Xiangtao
    Lu, Xiaoqiang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (06)
  • [9] Diverse-Feature Collaborative Progressive Learning for Visible-Infrared Person Re-Identification
    Chan, Sixian
    Meng, Weihao
    Bai, Cong
    Hu, Jie
    Chen, Shengyong
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (05) : 7754 - 7763