Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification

Cited by: 119
Authors:
Yang, Mouxing [1 ]
Huang, Zhenyu [1 ]
Hu, Peng [1 ]
Li, Taihao [2 ]
Lv, Jiancheng [1 ]
Peng, Xi [1 ]
Affiliations:
[1] Sichuan Univ, Coll Comp Sci, Chengdu, Peoples R China
[2] Zhejiang Lab, Hangzhou, Peoples R China
DOI: 10.1109/CVPR52688.2022.01391
CLC Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
In this paper, we study an untouched problem in visible-infrared person re-identification (VI-ReID), namely, Twin Noisy Labels (TNL), which refers to noisy annotation and noisy correspondence. In brief, on the one hand, it is almost inevitable that some persons are annotated with the wrong identity due to the complexity of data collection and annotation, e.g., the poor recognizability of the infrared modality. On the other hand, wrongly annotated data in a single modality will eventually contaminate the cross-modal correspondence, thus leading to noisy correspondence. To solve the TNL problem, we propose a novel method for robust VI-ReID, termed DuAlly Robust Training (DART). Specifically, DART first computes the clean confidence of annotations by resorting to the memorization effect of deep neural networks. It then rectifies the noisy correspondence with the estimated confidence and divides the data into four groups for further use. Finally, DART employs a novel dually robust loss, consisting of a soft identification loss and an adaptive quadruplet loss, to achieve robustness against noisy annotation and noisy correspondence. Extensive experiments on the SYSU-MM01 and RegDB datasets verify the effectiveness of our method against twin noisy labels in comparison with five state-of-the-art methods.
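The abstract does not spell out the implementation. The sketch below illustrates, under stated assumptions, the two ingredients it names: estimating per-sample clean confidence from the memorization effect (here via a two-component Gaussian mixture fitted to per-sample losses, a common device in noisy-label learning) and a confidence-weighted soft identification loss that blends the annotated identity with the model's own prediction. The function names (clean_confidence, soft_identification_loss), the GMM choice, and the target-blending scheme are illustrative assumptions, not the exact formulation used in DART.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def clean_confidence(per_sample_losses):
    """Estimate the probability that each sample's annotation is clean.

    Memorization effect: early in training, clean samples tend to incur
    smaller losses than mislabeled ones, so a two-component GMM over the
    normalized losses separates the two populations. This estimator is
    an assumption; the paper may use a different one.
    """
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))  # low-loss component = clean
    return gmm.predict_proba(losses)[:, clean_component]  # confidence in [0, 1]


def soft_identification_loss(logits, labels, confidence, num_classes):
    """Confidence-weighted ID loss (illustrative): trust the annotation in
    proportion to its clean confidence and fall back to the model's own
    prediction for likely-noisy samples."""
    w = confidence.unsqueeze(1)                          # (N, 1)
    annotated = F.one_hot(labels, num_classes).float()   # given identity
    predicted = F.softmax(logits, dim=1).detach()        # model belief
    soft_target = w * annotated + (1.0 - w) * predicted  # blended target
    log_prob = F.log_softmax(logits, dim=1)
    return -(soft_target * log_prob).sum(dim=1).mean()


if __name__ == "__main__":
    # Toy usage: 8 samples, 4 identities, random logits.
    torch.manual_seed(0)
    logits = torch.randn(8, 4)
    labels = torch.randint(0, 4, (8,))
    per_sample_ce = F.cross_entropy(logits, labels, reduction="none")
    w = torch.from_numpy(clean_confidence(per_sample_ce.detach().numpy())).float()
    print(soft_identification_loss(logits, labels, w, num_classes=4))
```

The adaptive quadruplet loss and the four-group data split mentioned in the abstract would sit on top of these pieces; they are omitted here to keep the sketch minimal.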
Pages: 14288-14297 (10 pages)
Related Papers (50 in total):
  • [1] Modality Blur and Batch Alignment Learning for Twin Noisy Labels-based Visible-infrared Person Re-identification. Wu, Song; Shan, Shihao; Xiao, Guoqiang; Lew, Michael S.; Gao, Xinbo. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133.
  • [2] Occluded Visible-Infrared Person Re-Identification. Feng, Yujian; Ji, Yimu; Wu, Fei; Gao, Guangwei; Gao, Yang; Liu, Tianliang; Liu, Shangdong; Jing, Xiao-Yuan; Luo, Jiebo. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 1401-1413.
  • [3] Hybrid Modality Metric Learning for Visible-Infrared Person Re-Identification. Zhang, La; Guo, Haiyun; Zhu, Kuan; Qiao, Honglin; Huang, Gaopan; Zhang, Sen; Zhang, Huichen; Sun, Jian; Wang, Jinqiao. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (01).
  • [4] Stronger Heterogeneous Feature Learning for Visible-Infrared Person Re-Identification. Wang, Hao; Bi, Xiaojun; Yu, Changdong. NEURAL PROCESSING LETTERS, 2024, 56 (02).
  • [5] Robust Duality Learning for Unsupervised Visible-Infrared Person Re-Identification. Li, Yongxiang; Sun, Yuan; Qin, Yang; Peng, Dezhong; Peng, Xi; Hu, Peng. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 1937-1948.
  • [6] Implicit Discriminative Knowledge Learning for Visible-Infrared Person Re-Identification. Ren, Kaijie; Zhang, Lei. 2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024: 393-402.
  • [7] Contrastive Learning with Information Compensation for Visible-Infrared Person Re-Identification. Zhang, La; Guo, Haiyun; Zhao, Xu; Sun, Jian; Wang, Jinqiao. 2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024: 1266-1271.
  • [8] Visible-Infrared Person Re-Identification Via Feature Constrained Learning. Zhang, Jing; Chen, Guangfeng. LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (12).
  • [9] Progressive Discriminative Feature Learning for Visible-Infrared Person Re-Identification. Zhou, Feng; Cheng, Zhuxuan; Yang, Haitao; Song, Yifeng; Fu, Shengpeng. ELECTRONICS, 2024, 13 (14).