Multi-scale feature representation for person re-identification

Cited by: 0
Authors
Lu J. [1 ]
Wang H.-Y. [1 ]
Chen X. [2 ]
Zhang K.-B. [1 ]
Liu W. [1 ]
Affiliations
[1] School of Electronics and Information, Xi'an Polytechnic University, Xi'an
[2] Institute of Information Technology, Nantong Normal College, Nantong
Source
Kongzhi yu Juece/Control and Decision | 2021 / Vol. 36 / No. 12
Keywords
DukeMTMC-reID; Global feature; Local feature; Market-1501; Person re-identification; TriHard loss;
DOI
10.13195/j.kzyjc.2020.0952
Abstract
The strategy of merging global features with local features for pedestrian representation is frequently used to improve the discriminability of person re-identification (re-ID) models in complex scenes. However, extracting local features generally requires specialized models for specific semantic regions, which increases the complexity of the algorithm. Therefore, a re-ID model based on multi-scale feature learning is proposed. The model combines local features of different granularities with global features to acquire complementary multi-level discriminative information, realizing end-to-end person re-identification. To obtain high-resolution information while retaining more detail, both max pooling and average pooling are employed to downsample the features. In addition, the TriHard loss is introduced to constrain the global features, and random erasing is used for data augmentation, which further improves the adaptability of the model in complex scenes. Comparative experiments on the Market-1501 and DukeMTMC-reID datasets show that the rank-1 accuracy reaches 94.9% and 87.1%, respectively, verifying the effectiveness of the proposed method. © 2021, Editorial Office of Control and Decision. All rights reserved.
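To make the techniques named in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it shows dual max/average pooling for downsampling a feature map and a batch-hard triplet (TriHard) loss. The function names dual_pool and trihard_loss, the summation of the two pooled vectors, and the margin value 0.3 are assumptions for illustration only.

```python
# Illustrative sketch (assumptions noted above), not the paper's code.
import torch
import torch.nn.functional as F

def dual_pool(feat_map):
    """Downsample a CNN feature map with both max and average pooling,
    then sum the two pooled vectors (summation is an assumption)."""
    gmp = F.adaptive_max_pool2d(feat_map, 1).flatten(1)  # (N, C)
    gap = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)  # (N, C)
    return gmp + gap

def trihard_loss(features, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor, take the hardest
    positive and hardest negative within the mini-batch."""
    dist = torch.cdist(features, features, p=2)            # pairwise distances (N, N)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-identity mask (N, N)
    pos_mask = same & ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    # Hardest positive: largest distance among same-identity pairs.
    d_ap = dist.masked_fill(~pos_mask, float('-inf')).max(dim=1).values
    # Hardest negative: smallest distance among different-identity pairs.
    d_an = dist.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(d_ap - d_an + margin).mean()

if __name__ == "__main__":
    # Toy batch: 4 identities x 4 images each, random backbone feature maps.
    feat_maps = torch.randn(16, 256, 8, 4)
    labels = torch.arange(4).repeat_interleave(4)
    global_feat = dual_pool(feat_maps)
    print(trihard_loss(global_feat, labels))
```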
Pages: 3015-3022
Number of pages: 7