Cross-modality Person Re-identification Based on Joint Constraints of Image and Feature

Cited by: 0
Authors
Zhang Y.-K. [1 ,2 ]
Tan L. [1 ,2 ]
Chen J.-Y. [1 ,2 ]
Institutions
[1] National Engineering Laboratory for Big Data for Education, Central China Normal University, Wuhan
[2] National Engineering Research Center for E-Learning, Central China Normal University, Wuhan
Funding
National Natural Science Foundation of China;
Keywords
Cross modality; Joint constraint; Middle modality; Person re-identification;
DOI
10.16383/j.aas.c200184
Abstract
In recent years, person re-identification across visible and near-infrared images has attracted wide attention from both academia and industry. Existing methods mainly convert images between the two modalities to reduce the modality gap. However, because visible and near-infrared images are drawn from independent and differently distributed data, the converted images differ substantially from real ones, which limits further improvement of these methods. This paper therefore proposes a middle modality that bridges the visible and near-infrared modalities, so that images can be transferred seamlessly between them, preserving person identity while reducing the conversion gap between modalities. In addition, given the scarcity of cross-modality person re-identification datasets, this paper constructs a new cross-modality dataset and validates the proposed method through extensive experiments. In the All-Search Single-shot mode on the SYSU-MM01 dataset, the proposed method outperforms the D2RL algorithm by 4.2% in Rank-1 and 3.7% in mAP. On the Parking-01 dataset constructed in this paper, it outperforms a ResNet-50 baseline by 10.4% in both Rank-1 and mAP. Copyright © 2021 Acta Automatica Sinica. All rights reserved.
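The abstract does not give implementation details, but the core idea of a shared middle modality with joint image-and-feature constraints can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' actual method: the fixed luminance-style projection stands in for a learned generator, and the loss simply sums an image-level and a feature-level L2 discrepancy.

```python
import numpy as np

def to_middle_modality(image: np.ndarray) -> np.ndarray:
    """Project an image into a single-channel 'middle' modality.

    For RGB input of shape (H, W, 3) this is a luminance-style channel
    average; near-infrared input of shape (H, W) passes through
    unchanged. In the real method a learned generator would replace
    this fixed projection.
    """
    if image.ndim == 3:
        return image.mean(axis=2)
    return image

def joint_constraint_loss(vis, nir, feat_vis, feat_nir):
    """Joint constraint: image-level discrepancy in the middle modality
    plus feature-level discrepancy, both as mean squared error."""
    img_term = np.mean((to_middle_modality(vis) - to_middle_modality(nir)) ** 2)
    feat_term = np.mean((feat_vis - feat_nir) ** 2)
    return img_term + feat_term

# Toy example: the same scene in two modalities with identical features.
rng = np.random.default_rng(0)
vis = rng.random((4, 4, 3))          # visible (RGB) image
nir = vis.mean(axis=2)               # NIR stand-in: exactly the luminance
f_v, f_n = np.ones(8), np.ones(8)    # identical identity features
loss = joint_constraint_loss(vis, nir, f_v, f_n)
print(loss)  # 0.0 when both modalities and features are perfectly aligned
```

For a mismatched pair (different identities or poorly converted images), both terms grow, so minimizing this loss pushes the two modalities toward a common representation for the same person.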
Pages: 1943-1950
Page count: 7