Learning compact and representative features for cross-modality person re-identification

Cited by: 15
|
Authors
Gao, Guangwei [1 ,2 ]
Shao, Hao [1 ]
Wu, Fei [1 ]
Yang, Meng [3 ]
Yu, Yi [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Inst Adv Technol, Nanjing, Peoples R China
[2] Natl Inst Informat, Digital Content & Media Sci Res Div, Tokyo, Japan
[3] Sun Yat Sen Univ, Key Lab Machine Intelligence & Adv Comp, Minist Educ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Cross-modality; Angular triplet loss; Knowledge distillation loss;
DOI
10.1007/s11280-022-01014-5
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper addresses the cross-modality visible-infrared person re-identification (VI Re-ID) task, which aims to match pedestrian samples between the visible and infrared modalities. To reduce the modality discrepancy between samples from different cameras, most existing works impose constraints based on the Euclidean metric. Because a Euclidean distance metric cannot effectively measure the angles between embedded vectors, these solutions cannot learn an angularly discriminative feature embedding. Since the most important factor in embedding-based classification is whether the feature space is angularly discriminative, this paper presents a new loss function called the Enumerate Angular Triplet (EAT) loss. In addition, motivated by knowledge distillation, a novel Cross-Modality Knowledge Distillation (CMKD) loss is presented to narrow the gap between features of different modalities before feature embedding. Benefiting from these two designs, the embedded features are discriminative enough to tackle the modality-discrepancy problem. Experimental results on the RegDB and SYSU-MM01 datasets demonstrate that the proposed method outperforms other state-of-the-art methods. Code is available at https://github.com/IVIPLab/LCCRF.
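The abstract's key idea is to constrain the angles between embedded vectors rather than their Euclidean distances. As a rough illustration only (not the authors' implementation; the function name, margin value, and toy vectors here are assumptions), a single angular triplet term can be sketched in plain Python:

```python
import math

def angular_triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge loss on angles (radians) between L2-normalized embeddings.

    Pushes the anchor-positive angle to be smaller than the
    anchor-negative angle by at least `margin`.
    """
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        # Clip to guard against floating-point drift outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

    return max(0.0, angle(anchor, positive) - angle(anchor, negative) + margin)

# Toy check: positive aligned with the anchor, negative orthogonal to it.
a = [1.0, 0.0]
p = [2.0, 0.0]   # same direction  -> angle 0
n = [0.0, 1.0]   # orthogonal      -> angle pi/2
print(angular_triplet_loss(a, p, n))  # -> 0.0 (constraint satisfied)
```

Because the loss depends only on angles, it is invariant to the magnitude of each embedding, which is the property a Euclidean triplet loss lacks.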
Pages: 1649-1666
Page count: 18
Related Papers
50 records
  • [21] Efficient Shared Feature Learning for Cross-modality Person Re-identification. Song, Wanru; Wang, Xinyi; Liu, Feng. 2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022: 100-105.
  • [22] Unbiased feature enhancement framework for cross-modality person re-identification. Bowen Yuan; Bairu Chen; Zhiyi Tan; Xi Shao; Bing-Kun Bao. Multimedia Systems, 2022, 28: 749-759.
  • [23] HPILN: a feature learning framework for cross-modality person re-identification. Zhao, Yun-Bo; Lin, Jian-Wu; Xuan, Qi; Xi, Xugang. IET IMAGE PROCESSING, 2019, 13(14): 2897-2904.
  • [24] Cross-Modality Transformer for Visible-Infrared Person Re-Identification. Jiang, Kongzhu; Zhang, Tianzhu; Liu, Xiang; Qian, Bingqiao; Zhang, Yongdong; Wu, Feng. COMPUTER VISION - ECCV 2022, PT XIV, 2022, 13674: 480-496.
  • [25] Co-segmentation assisted cross-modality person re-identification. Huang, Nianchang; Xing, Baichao; Zhang, Qiang; Han, Jungong; Huang, Jin. INFORMATION FUSION, 2024, 104.
  • [26] Cross-modality person re-identification using hybrid mutual learning. Zhang, Zhong; Dong, Qing; Wang, Sen; Liu, Shuang; Xiao, Baihua; Durrani, Tariq S. IET COMPUTER VISION, 2023, 17(01): 1-12.
  • [27] Triplet interactive attention network for cross-modality person re-identification. Zhang, Chenrui; Chen, Ping; Lei, Tao; Meng, Hongying. PATTERN RECOGNITION LETTERS, 2021, 152: 202-209.
  • [28] Deep feature learning with attributes for cross-modality person re-identification. Zhang, Shikun; Chen, Changhong; Song, Wanru; Gan, Zongliang. JOURNAL OF ELECTRONIC IMAGING, 2020, 29(03).
  • [29] Cross-modality person re-identification via modality-synergy alignment learning. Lin, Yuju; Wang, Banghai. MACHINE VISION AND APPLICATIONS, 2024, 35(06).
  • [30] Cross-Modality Transformer With Modality Mining for Visible-Infrared Person Re-Identification. Liang, Tengfei; Jin, Yi; Liu, Wu; Li, Yidong. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 8432-8444.