Unbiased Feature Learning with Causal Intervention for Visible-Infrared Person Re-Identification

Times Cited: 0
Authors
Yuan, Bowen [1]
Lu, Jiahao [1]
You, Sisi [1]
Bao, Bing-kun [1]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visible-infrared person re-identification; cross modality; causal inference; backdoor adjustment;
DOI
10.1145/3674737
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Visible-infrared person re-identification (VI-ReID) aims to match individuals across visible and infrared modalities. Existing methods can learn class-separable features but still struggle with intra-class modality gaps caused by modality-specific information, which is discriminative in one modality yet absent in the other (e.g., a black striped shirt). This interfering information creates a spurious correlation with the class label, which hinders cross-modality alignment. To this end, we propose an Unbiased feature learning method based on Causal inTervention (UCT) for VI-ReID, addressing the problem from three aspects. First, through the proposed structural causal graph, we show that modality-specific information acts as a confounder that restricts intra-class feature alignment. Second, we propose a causal intervention method that removes the confounder via an effective approximation of backdoor adjustment, which corrects the spurious correlation between features and labels. Third, we incorporate the proposed approximation into the basic VI-ReID model: the confounder is removed by adjusting the extracted features with a set of weighted pre-trained class prototypes from both modalities, where the weights are adapted to the input features. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate that our method outperforms state-of-the-art methods. Code is available at https://github.com/NJUPT-MCC/UCT.
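The abstract's final technical step (adjusting extracted features with feature-adaptively weighted, pre-trained class prototypes from both modalities) can be pictured with a minimal sketch. The PyTorch snippet below is an illustration under assumptions, not the authors' released implementation (see the linked repository): the module name PrototypeAdjustment, the residual fusion, the scaled dot-product weighting, and the example dimensions are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeAdjustment(nn.Module):
    """Feature-adaptive weighting of modality-specific class prototypes,
    sketching an approximation of backdoor adjustment (hypothetical)."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # Pre-trained class prototypes for each modality, e.g. class-mean
        # features from a baseline VI-ReID model (zero placeholders here).
        self.register_buffer("vis_protos", torch.zeros(num_classes, feat_dim))
        self.register_buffer("ir_protos", torch.zeros(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Treat the union of visible and infrared prototypes as the
        # confounder set to marginalize over.
        protos = torch.cat([self.vis_protos, self.ir_protos], dim=0)  # (2C, D)
        # Weights adapted to the input features: scaled dot-product similarity.
        weights = F.softmax(feats @ protos.t() / feats.size(-1) ** 0.5, dim=-1)
        # Weighted sum over prototypes, i.e. the expected confounder context.
        context = weights @ protos  # (B, D)
        # Adjust the extracted features with the weighted prototypes.
        return feats + context

# Usage: adjust a batch of backbone features before the classification head.
adjust = PrototypeAdjustment(num_classes=395, feat_dim=2048)
adjusted = adjust(torch.randn(32, 2048))
```

In the actual method the prototypes would be frozen, pre-trained class features; the zero-initialized buffers above are placeholders for illustration only.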
Pages: 20