Person Re-Identification With Triplet Focal Loss

Cited by: 28
Authors
Zhang, Shizhou [1 ]
Zhang, Qi [1 ]
Wei, Xing [2 ]
Zhang, Yanning [1 ]
Xia, Yong [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Natl Engn Lab Integrated Aerosp Ground Ocean Big, Xian 710072, Shaanxi, Peoples R China
[2] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Shaanxi, Peoples R China
Source
IEEE ACCESS | 2018, Vol. 6
Funding
US National Science Foundation; China Postdoctoral Science Foundation;
Keywords
Re-identification; triplet focal loss; hard example mining; DEEP; SET;
DOI
10.1109/ACCESS.2018.2884743
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline classification code
0812 ;
Abstract
Person re-identification (ReID), which aims to match individuals across non-overlapping cameras, has attracted much attention in computer vision due to its research significance and potential applications. Triplet-loss-based CNN models have been very successful for person ReID; the triplet loss optimizes the feature embedding space so that distances between samples with the same identity are much shorter than distances between samples with different identities. Researchers have found that hard triplet mining is crucial to the success of the triplet loss. In this paper, motivated by the focal loss designed for classification models, we propose the triplet focal loss for person ReID. The triplet focal loss adaptively up-weights hard triplets and relatively down-weights easy ones by simply projecting the original Euclidean distances into an exponential kernel space. We conduct experiments on three of the largest benchmark datasets currently available for person ReID, namely Market-1501, DukeMTMC-ReID, and CUHK03, and the results verify that the proposed triplet focal loss greatly outperforms the traditional triplet loss and achieves performance competitive with representative state-of-the-art methods.
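The abstract's central idea, projecting Euclidean distances through an exponential kernel before applying the triplet hinge so that hard triplets contribute disproportionately to the loss, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function name, the kernel form exp(d/sigma), and the margin and sigma values are assumptions made for illustration.

```python
import numpy as np

def triplet_focal_loss(anchor, positive, negative, margin=0.5, sigma=1.0):
    """Sketch of a triplet focal loss (assumed form, not the paper's exact one).

    Each input is a (batch, dim) array of embeddings. Distances are mapped
    through exp(d / sigma) before the hinge; because the exponential grows
    fast, hard triplets (large d_ap, small d_an) are amplified relative to
    easy ones, mimicking the up-weighting described in the abstract.
    """
    # Per-triplet Euclidean distances in the embedding space.
    d_ap = np.linalg.norm(anchor - positive, axis=1)
    d_an = np.linalg.norm(anchor - negative, axis=1)
    # Project distances into the exponential kernel space.
    k_ap = np.exp(d_ap / sigma)
    k_an = np.exp(d_an / sigma)
    # Standard hinge on the projected distances, averaged over the batch.
    return np.maximum(0.0, k_ap - k_an + margin).mean()
```

With this weighting, an easy triplet (positive on top of the anchor, negative far away) yields zero loss, while a hard triplet's loss grows exponentially with the anchor-positive distance, which is the adaptive re-weighting effect the abstract attributes to the kernel projection.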
Pages: 78092-78099
Page count: 8
Related Papers
50 records
  • [31] Yang, Zhao; Liu, Jiehao; Liu, Tie; Zhu, Yuanxin; Wang, Li; Tao, Dapeng. Equidistant distribution loss for person re-identification. NEUROCOMPUTING, 2021, 455: 255-264.
  • [32] Peng, Yao; Lin, Yining; Ni, Huajian; Gao, Hua; Hu, Chenchen. A feature enhancement loss for person re-identification. SYSTEMS SCIENCE & CONTROL ENGINEERING, 2023, 11 (01).
  • [33] Chen, Bing; Zha, YuFei; Wu, Min; Li, YunQiang; Hou, Zhiqiang. Full Batch Loss for Person Re-identification. TENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2018), 2019, 11069.
  • [34] Liu, Junqi; Jiang, Na; Zhou, Zhong; Xu, Yue. Person Re-identification with Joint-loss. 2017 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICRVR 2017), 2017: 1-6.
  • [35] Cheng, De; Gong, Yihong; Shi, Weiwei; Zhang, Shizhou. Person re-identification by the symmetric triplet and identification loss function (vol 77, pg 3533, 2018). MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (03): 3551+.
  • [36] Huang, Shao-Kang; Hsu, Chen-Chien; Wang, Wei-Yen. Person Re-Identification with Improved Performance by Incorporating Focal Tversky Loss in AGW Baseline. SENSORS, 2022, 22 (24).
  • [37] Zhao, Cairong; Chen, Kang; Wei, Zhihua; Chen, Yipeng; Miao, Duoqian; Wang, Wei. Multilevel triplet deep learning model for person re-identification. PATTERN RECOGNITION LETTERS, 2019, 117: 161-168.
  • [38] Zheng, Fudan; Cai, Tingting; Wang, Ying; Deng, Chufu; Chen, Zhiguang; Zhu, Huiling. A Mask-Pooling Model With Local-Level Triplet Loss for Person Re-Identification. IEEE ACCESS, 2020, 8: 138191-138202.
  • [39] Yan, Cheng; Pang, Guansong; Bai, Xiao; Liu, Changhong; Ning, Xin; Gu, Lin; Zhou, Jun. Beyond Triplet Loss: Person Re-Identification With Fine-Grained Difference-Aware Pairwise Loss. IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24: 1665-1677.
  • [40] Xu, Xin; Yuan, Xin; Wang, Zheng; Zhang, Kai; Hu, Ruimin. Rank-in-Rank Loss for Person Re-identification. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (02).