Deep feature learning with attributes for cross-modality person re-identification

Cited by: 10
Authors
Zhang, Shikun [1 ]
Chen, Changhong [1 ]
Song, Wanru [1 ]
Gan, Zongliang [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Telecommun & Informat Engn, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
cross-modality person re-identification; attributes; modality invariant; feature extraction;
DOI
10.1117/1.JEI.29.3.033017
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808; 0809;
Abstract
Cross-modality person re-identification (Re-ID) between the RGB and infrared domains is a challenging problem that aims to retrieve pedestrian images across modalities and camera views. Because there is a large gap between the two modalities, the core difficulty is how to bridge the cross-modality gap between images. However, most approaches address this issue mainly by increasing the interclass discrepancy between features, and few studies focus on decreasing the intraclass cross-modality discrepancy, which is crucial for cross-modality Re-ID. Moreover, we find that, despite the large modality gap, the attribute representations of a pedestrian generally remain unchanged. We provide a different view of the cross-modality person Re-ID problem, which uses additional attribute labels as auxiliary information to increase intraclass cross-modality similarity. First, we manually annotate attribute labels for a large-scale cross-modality Re-ID dataset. Second, we propose an end-to-end network that learns modality-invariant and identity-specific local features under the joint supervision of an attribute classification loss and an identity classification loss. Experimental results on a large-scale cross-modality Re-ID benchmark show that our model achieves competitive Re-ID performance compared with state-of-the-art methods. To demonstrate the versatility of the model, we also report results on the Market-1501 dataset. (C) 2020 SPIE and IS&T
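The abstract describes training under the joint supervision of an attribute classification loss and an identity classification loss. A minimal sketch of how such a joint objective might be combined, assuming softmax cross-entropy for identity, per-attribute binary cross-entropy, and a weighting factor `lam` (the specific losses and weighting are not given in this record and are assumptions here):

```python
import math

def softmax_cross_entropy(logits, label):
    # Numerically stable log-softmax cross-entropy for one sample:
    # loss = log(sum_j exp(z_j)) - z_label
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def binary_cross_entropy(logit, target):
    # Sigmoid followed by binary cross-entropy for one binary attribute,
    # clamped to avoid log(0).
    p = 1.0 / (1.0 + math.exp(-logit))
    p = min(max(p, 1e-7), 1.0 - 1e-7)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def joint_loss(id_logits, id_label, attr_logits, attr_targets, lam=1.0):
    # L = L_id + lam * mean(L_attr); the equal-weight default is an assumption.
    id_loss = softmax_cross_entropy(id_logits, id_label)
    attr_loss = sum(binary_cross_entropy(z, t)
                    for z, t in zip(attr_logits, attr_targets)) / len(attr_logits)
    return id_loss + lam * attr_loss
```

With `lam=0` the objective reduces to the identity loss alone; raising `lam` shifts supervision toward the modality-stable attribute cues.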
Pages: 14
Related Papers
50 records in total
  • [1] Efficient Shared Feature Learning for Cross-modality Person Re-identification
    Song, Wanru
    Wang, Xinyi
    Liu, Feng
    [J]. 2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 100 - 105
  • [2] HPILN: a feature learning framework for cross-modality person re-identification
    Zhao, Yun-Bo
    Lin, Jian-Wu
    Xuan, Qi
    Xi, Xugang
    [J]. IET IMAGE PROCESSING, 2019, 13 (14) : 2897 - 2904
  • [3] Hierarchical Feature Fusion for Cross-Modality Person Re-identification
    Fu, Wen
    Lim, Monghao
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (16)
  • [4] Dynamic feature weakening for cross-modality person re-identification
    Lu, Jian
    Chen, Mengdie
    Wang, Hangying
    Pang, Feifei
    [J]. COMPUTERS & ELECTRICAL ENGINEERING, 2023, 109
  • [5] Dual Mutual Learning for Cross-Modality Person Re-Identification
    Zhang, Demao
    Zhang, Zhizhong
    Ju, Ying
    Wang, Cong
    Xie, Yuan
    Qu, Yanyun
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (08) : 5361 - 5373
  • [6] Global and Part Feature Fusion for Cross-Modality Person Re-Identification
    Wang, Xianju
    Cordova, Ronald S.
    [J]. IEEE ACCESS, 2022, 10 : 122038 - 122046
  • [7] Unbiased feature enhancement framework for cross-modality person re-identification
    Yuan, Bowen
    Chen, Bairu
    Tan, Zhiyi
    Shao, Xi
    Bao, Bing-Kun
    [J]. MULTIMEDIA SYSTEMS, 2022, 28 (03) : 749 - 759
  • [8] Modality interactive attention for cross-modality person re-identification
    Zou, Zilin
    Chen, Ying
    [J]. IMAGE AND VISION COMPUTING, 2024, 148
  • [9] Enhancing the discriminative feature learning for visible-thermal cross-modality person re-identification
    Liu, Haijun
    Cheng, Jian
    Wang, Wen
    Su, Yanzhou
    Bai, Haiwei
    [J]. NEUROCOMPUTING, 2020, 398 : 11 - 19