Disentangled body features for clothing change person re-identification

Cited by: 4
|
Authors
Ding, Yongkang [1 ]
Wu, Yinghao [1 ]
Wang, Anqi [1 ]
Gong, Tiantian [1 ]
Zhang, Liyan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Clothes-changing scenarios; Vision transformer; Semantic segmentation; Disentangled features;
DOI
10.1007/s11042-024-18440-4
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
With the rapid development of computer vision and deep learning, person re-identification (ReID) has attracted widespread attention as an important research area. Most existing ReID methods focus on short-term re-identification, and in scenarios where pedestrians change their clothes, traditional methods struggle because of the large changes in appearance. This paper therefore proposes a clothes-changing person re-identification (CC-ReID) method, SViT-ReID, which is based on a Vision Transformer and incorporates semantic information. The method integrates semantic segmentation maps to extract features and representations of pedestrian instances more accurately in complex scenes, enabling the model to learn cues that are unrelated to clothing. Specifically, clothing-unrelated features (such as the face, arms, legs, and feet) are extracted from the features produced by a pedestrian parsing task and fused with the global features to emphasize the importance of these body regions. In addition, the complete semantic features derived from pedestrian parsing are fused with the global features; the fused features then undergo shuffle and grouping operations to generate local features, which are computed in parallel with the global features, improving the model's robustness and accuracy. Experimental evaluations on two real-world benchmarks show that SViT-ReID achieves state-of-the-art performance, and extensive ablation studies and visualizations illustrate the effectiveness of the method. SViT-ReID achieves Top-1 accuracy of 55.2% and 43.4% on the PRCC and LTCC datasets, respectively.
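As a rough illustration of the feature-composition idea summarized in the abstract (clothing-unrelated body features selected via a human-parsing map, fused with global Vision Transformer features, and shuffled/grouped into local features), the following PyTorch sketch shows one way such a module could be wired. It is not the authors' implementation: all module names, tensor shapes, the parsing label set, and the choice to shuffle patch tokens rather than a fused token sequence are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the SViT-ReID code): clothing-unrelated body
# features are pooled from patch tokens using a parsing mask, fused with the
# global feature, and shuffled/grouped patch tokens yield local features.
import torch
import torch.nn as nn

# Hypothetical parsing label ids for clothing-unrelated regions
# (e.g., face, arms, legs, feet); the real label set depends on the parser used.
BODY_LABELS = (1, 2, 3, 4)

class DisentangledFusion(nn.Module):
    def __init__(self, dim=768, num_groups=4):
        super().__init__()
        self.num_groups = num_groups
        self.fuse_body = nn.Linear(2 * dim, dim)   # fuse global + body-only features
        self.fuse_sem = nn.Linear(2 * dim, dim)    # fuse global + full semantic features
        self.local_heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_groups))

    def forward(self, patch_tokens, global_feat, parsing):
        # patch_tokens: (B, N, D) ViT patch embeddings
        # global_feat:  (B, D)    class-token / pooled global feature
        # parsing:      (B, N)    per-patch semantic labels from a human parser
        body_mask = torch.zeros_like(parsing, dtype=torch.bool)
        for lbl in BODY_LABELS:
            body_mask |= parsing == lbl
        denom = body_mask.sum(1, keepdim=True).clamp(min=1).float()
        body_feat = (patch_tokens * body_mask.unsqueeze(-1)).sum(1) / denom  # (B, D)
        sem_feat = patch_tokens.mean(1)                                      # (B, D)

        fused_body = self.fuse_body(torch.cat([global_feat, body_feat], dim=-1))
        fused_sem = self.fuse_sem(torch.cat([global_feat, sem_feat], dim=-1))

        # Shuffle patch tokens, split them into groups, and pool each group
        # into a local feature (an approximation of the shuffle-and-group step).
        perm = torch.randperm(patch_tokens.size(1), device=patch_tokens.device)
        groups = patch_tokens[:, perm, :].chunk(self.num_groups, dim=1)
        local_feats = [head(g.mean(1)) for head, g in zip(self.local_heads, groups)]
        return fused_body, fused_sem, local_feats

# Usage with random tensors standing in for real backbone / parser outputs.
if __name__ == "__main__":
    B, N, D = 2, 128, 768
    model = DisentangledFusion(dim=D)
    out = model(torch.randn(B, N, D), torch.randn(B, D), torch.randint(0, 8, (B, N)))
    print(out[0].shape, out[1].shape, len(out[2]))
```

In this sketch the parsing labels simply mask patch tokens before pooling; the actual SViT-ReID architecture may combine the semantic and global streams differently, and the grouping is shown here only to make the parallel global/local computation concrete.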
Pages: 69693-69714
Number of pages: 22
Related papers
50 records in total
  • [31] Fusion of multiple channel features for person re-identification
    Wang, Xuekuan
    Zhao, Cairong
    Miao, Duoqian
    Wei, Zhihua
    Zhang, Renxian
    Ye, Tingfei
    NEUROCOMPUTING, 2016, 213 : 125 - 136
  • [32] Person Re-identification Based on Fused Attribute Features
    Shao X.-W.
    Shuai H.
    Liu Q.-S.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (02): 564 - 571
  • [33] Extensive Comparison of Visual Features for Person Re-identification
    Wang, Guanzhong
    Fang, Yikai
    Wang, Jinqiao
    Sun, Jian
    8TH INTERNATIONAL CONFERENCE ON INTERNET MULTIMEDIA COMPUTING AND SERVICE (ICIMCS2016), 2016, : 192 - 196
  • [34] Learned versus Handcrafted Features for Person Re-identification
    Chahla, C.
    Snoussi, H.
    Abdallah, F.
    Dornaika, F.
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2020, 34 (04)
  • [35] Person Re-identification by Discriminatively Selecting Parts and Features
    Bhuiyan, Amran
    Perina, Alessandro
    Murino, Vittorio
    COMPUTER VISION - ECCV 2014 WORKSHOPS, PT III, 2015, 8927 : 147 - 161
  • [36] Person re-identification with data-driven features
    Li, Xiang
    Springer Verlag (8833)
  • [37] Person Re-identification
    Bak, Slawomir
    Bremond, Francois
    ERCIM NEWS, 2013, (95): : 33 - 34
  • [38] DeepDiff: Learning deep difference features on human body parts for person re-identification
    Huang, Yan
    Sheng, Hao
    Zheng, Yanwei
    Xiong, Zhang
    NEUROCOMPUTING, 2017, 241 : 191 - 203
  • [39] Learning deep features from body and parts for person re-identification in camera networks
    Zhang, Zhong
    Si, Tongzhen
    EURASIP Journal on Wireless Communications and Networking, 2018
  • [40] Learning deep features from body and parts for person re-identification in camera networks
    Zhang, Zhong
    Si, Tongzhen
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2018