Shape-Erased Feature Learning for Visible-Infrared Person Re-Identification

Cited by: 47
Authors
Feng, Jiawei [1]
Wu, Ancong [1]
Zheng, Wei-Shi [1,2,3]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
[3] Guangdong Key Lab Informat Secur Technol, Guangzhou, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/CVPR52729.2023.02179
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the modality gap between visible and infrared images and their high visual ambiguity, learning diverse modality-shared semantic concepts for visible-infrared person re-identification (VI-ReID) remains a challenging problem. Body shape is one of the significant modality-shared cues for VI-ReID. To mine more diverse modality-shared cues, we expect that erasing body-shape-related semantic concepts from the learned features can force the ReID model to extract other modality-shared features for identification. To this end, we propose a shape-erased feature learning paradigm that decorrelates modality-shared features in two orthogonal subspaces. Jointly learning shape-related features in one subspace and shape-erased features in its orthogonal complement maximizes the conditional mutual information between the shape-erased features and identity given body shape information, thus explicitly enhancing the diversity of the learned representation. Extensive experiments on the SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate the effectiveness of our method.
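The orthogonal-subspace idea in the abstract can be illustrated with a minimal sketch. This is not the authors' released code: the module name ShapeErasedHead, the shape_dim split size, and the external body-shape feature shape_feat are illustrative assumptions. A backbone feature is projected by an orthonormalized matrix and split into two slices, so the shape-related and shape-erased sub-features lie in orthogonal subspaces; both slices are supervised by identity, while only the shape slice is regressed toward a body-shape feature.

# Minimal sketch, NOT the authors' implementation: split a backbone feature into
# two orthogonal subspaces; supervise one slice with body-shape features and both
# slices with identity, so the second slice learns shape-erased identity cues.
# ShapeErasedHead, shape_dim, and shape_feat are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeErasedHead(nn.Module):
    def __init__(self, feat_dim=2048, shape_dim=512, num_ids=395):
        super().__init__()
        # One square projection, orthonormalized at run time so that the two
        # slices of the projected feature span orthogonal (decorrelated) subspaces.
        self.proj = nn.Parameter(torch.randn(feat_dim, feat_dim))
        self.shape_dim = shape_dim
        self.id_cls_shape = nn.Linear(shape_dim, num_ids)              # shape-related branch
        self.id_cls_erased = nn.Linear(feat_dim - shape_dim, num_ids)  # shape-erased branch

    def forward(self, feat):                       # feat: (B, feat_dim) backbone feature
        q, _ = torch.linalg.qr(self.proj)          # orthonormal basis of the feature space
        z = feat @ q
        z_shape, z_erased = z[:, :self.shape_dim], z[:, self.shape_dim:]
        return z_shape, z_erased, self.id_cls_shape(z_shape), self.id_cls_erased(z_erased)

def training_loss(logits_shape, logits_erased, z_shape, shape_feat, id_labels):
    # Identity loss on both branches; a shape-regression term only on the shape
    # branch, pushing the erased branch toward non-shape modality-shared cues.
    id_loss = F.cross_entropy(logits_shape, id_labels) + F.cross_entropy(logits_erased, id_labels)
    shape_loss = F.mse_loss(z_shape, shape_feat)   # shape_feat: feature from a body-shape (mask) encoder
    return id_loss + shape_loss

Under this decomposition, the regression term anchors the first subspace to body shape, while the identity loss on the complementary slice encourages identity information that does not depend on shape, loosely matching the conditional-mutual-information view described in the abstract.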
Pages: 22752-22761
Number of pages: 10
Related Papers
50 records in total
  • [31] Adaptive Middle Modality Alignment Learning for Visible-Infrared Person Re-identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, : 2176 - 2196
  • [32] Style-Agnostic Representation Learning for Visible-Infrared Person Re-Identification
    Wu, Jianbing
    Liu, Hong
    Shi, Wei
    Liu, Mengyuan
    Li, Wenhao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 2263 - 2275
  • [33] Towards a Unified Middle Modality Learning for Visible-Infrared Person Re-Identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 788 - 796
  • [34] Visible-infrared person re-identification via specific and shared representations learning
    Zheng, Aihua
    Liu, Juncong
    Wang, Zi
    Huang, Lili
    Li, Chenglong
    Yin, Bing
    VISUAL INTELLIGENCE, 1 (1):
  • [35] Modality-agnostic learning for robust visible-infrared person re-identification
    Gong, Shengrong
    Li, Shuomin
    Xie, Gengsheng
    Yao, Yufeng
    Zhong, Shan
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (03)
  • [36] Dual-Semantic Consistency Learning for Visible-Infrared Person Re-Identification
    Zhang, Yiyuan
    Kang, Yuhao
    Zhao, Sanyuan
    Shen, Jianbing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1554 - 1565
  • [37] Learning Modality-Specific Representations for Visible-Infrared Person Re-Identification
    Feng, Zhanxiang
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 579 - 590
  • [38] Cross-modality consistency learning for visible-infrared person re-identification
    Shao, Jie
    Tang, Lei
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [39] Multi-Stage Auxiliary Learning for Visible-Infrared Person Re-Identification
    Zhang, Huadong
    Cheng, Shuli
    Du, Anyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (11) : 12032 - 12047
  • [40] Attention-enhanced feature mapping network for visible-infrared person re-identification
    Liu, Shuaiyi
    Han, Ke
    MACHINE VISION AND APPLICATIONS, 2025, 36 (02)