Person Re-Identification Method Based on Image Style Transfer

Cited by: 0
Authors
Wang C.-K. [1 ]
Chen Y.-L. [1 ]
Cai X.-D. [2 ]
Affiliations
[1] School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Guilin
[2] School of Information and Communication, Guilin University of Electronic Technology, Guilin
Keywords
Cycle-consistent generative adversarial network; Focal loss; Label smoothing regularization; Person re-identification
DOI
10.13190/j.jbupt.2020-147
Abstract
The training sets of existing person re-identification models come from a limited number of fixed cameras, so the samples lack style diversity. A cycle-consistent generative adversarial network can transfer image styles across the different cameras, enriching sample-style diversity at low cost. To improve the model's generalization ability, a new training mechanism that fuses positive and negative samples is designed: the style-transferred samples are treated as negative samples, the original samples as positive samples, and both are fed into model training simultaneously. Furthermore, label smoothing regularization is adopted to prevent overfitting and to account for the loss at false-label positions, while a focal loss function is adopted to focus training on hard, error-prone samples and to optimize the loss on negative samples. Experiments show significant improvements of 1.51% and 2.07% on the Market-1501 and DukeMTMC-reID datasets, respectively. © 2021, Editorial Department of Journal of Beijing University of Posts and Telecommunications. All rights reserved.
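The two loss functions named in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general techniques, not the authors' implementation; the smoothing factor `eps` and focusing parameter `gamma` are assumed hyperparameters.

```python
import numpy as np

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross-entropy with label smoothing regularization: the one-hot
    target is replaced by (1 - eps) on the true class, with eps/K
    spread uniformly over all K classes, so non-target ("false label")
    positions also contribute to the loss."""
    K = logits.shape[-1]
    log_p = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    smooth = np.full(K, eps / K)
    smooth[target] += 1.0 - eps
    return float(-np.sum(smooth * log_p))

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: the modulating factor (1 - p_t)^gamma
    down-weights easy, well-classified samples so training focuses
    on hard, error-prone ones."""
    p_t = p if y == 1 else 1.0 - p
    return float(-((1.0 - p_t) ** gamma) * np.log(p_t))
```

With `gamma=2`, a confidently correct prediction (p_t = 0.9) contributes roughly 80x less loss than an uncertain one (p_t = 0.6), which is the mechanism the abstract relies on to emphasize hard samples.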
Pages: 67-72
Page count: 5