Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification

Citations: 119
Authors
Yang, Mouxing [1 ]
Huang, Zhenyu [1 ]
Hu, Peng [1 ]
Li, Taihao [2 ]
Lv, Jiancheng [1 ]
Peng, Xi [1 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu, Peoples R China
[2] Zhejiang Lab, Hangzhou, Peoples R China
DOI
10.1109/CVPR52688.2022.01391
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study a previously untouched problem in visible-infrared person re-identification (VI-ReID), namely, Twin Noise Labels (TNL), which refers to noisy annotation and noisy correspondence. In brief, on the one hand, it is inevitable that some persons are annotated with the wrong identity due to the complexity of data collection and annotation, e.g., the poor recognizability of the infrared modality. On the other hand, wrongly annotated data in a single modality will eventually contaminate the cross-modal correspondence, thus leading to noisy correspondence. To solve the TNL problem, we propose a novel method for robust VI-ReID, termed DuAlly Robust Training (DART). In brief, DART first computes the clean confidence of annotations by resorting to the memorization effect of deep neural networks. Then, it rectifies the noisy correspondence with the estimated confidence and further divides the data into four groups for further use. Finally, DART employs a novel dually robust loss, consisting of a soft identification loss and an adaptive quadruplet loss, to achieve robustness against both noisy annotation and noisy correspondence. Extensive experiments on the SYSU-MM01 and RegDB datasets verify the effectiveness of our method against twin noisy labels in comparison with five state-of-the-art methods.
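The abstract's first step, estimating the clean confidence of annotations via the memorization effect, can be illustrated with a common noisy-label recipe: since deep networks fit clean labels before noisy ones, per-sample training losses tend to be bimodal, and the posterior of a two-component Gaussian mixture over the losses serves as a soft "clean" probability. This is a minimal sketch of that general idea, not the authors' actual implementation; the function name `clean_confidence` and the synthetic losses are illustrative assumptions.

```python
# Sketch: estimate per-sample "clean annotation" confidence from training losses.
# Assumption: clean samples concentrate in a low-loss mode (memorization effect),
# so the posterior of the low-mean GMM component acts as a soft clean probability.
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_confidence(losses: np.ndarray) -> np.ndarray:
    """Posterior probability that each sample's annotation is clean."""
    losses = losses.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))  # low-loss component = clean
    return gmm.predict_proba(losses)[:, clean_comp]

# Toy data: 80 "clean" samples with small losses, 20 "noisy" ones with large losses.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 80), rng.normal(2.0, 0.3, 20)])
w = clean_confidence(losses)  # high for the first 80, low for the last 20
```

In a pipeline like the one the abstract describes, such confidences could then be thresholded or used as soft weights to partition the data before applying the robust losses.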
Pages: 14288 - 14297 (10 pages)
Related Papers
50 records in total
  • [41] Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification
    Yang, Bin
    Chen, Jun
    Chen, Cuiqun
    Ye, Mang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 1767 - 1779
  • [42] Dual-attentive cascade clustering learning for visible-infrared person re-identification
    Wang, Xianju
    Chen, Cuiqun
    Zhu, Yong
    Chen, Shuguang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 : 19729 - 19746
  • [43] On exploring pose estimation as an auxiliary learning task for Visible-Infrared Person Re-identification
    Miao, Yunqi
    Huang, Nianchang
    Ma, Xiao
    Zhang, Qiang
    Han, Jungong
    NEUROCOMPUTING, 2023, 556
  • [44] Shallow-Deep Collaborative Learning for Unsupervised Visible-Infrared Person Re-Identification
    Yang, Bin
    Chen, Jun
    Ye, Mang
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 16870 - 16879
  • [45] On learning distribution alignment for video-based visible-infrared person re-identification
    Fang, Pengfei
    Hu, Yaojun
    Zhu, Shipeng
    Xue, Hui
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 237
  • [46] Image-text feature learning for unsupervised visible-infrared person re-identification
    Guo, Jifeng
    Pang, Zhiqi
    IMAGE AND VISION COMPUTING, 2025, 158
  • [47] SDL: Spectrum-Disentangled Representation Learning for Visible-Infrared Person Re-Identification
    Kansal, Kajal
    Subramanyam, A. V.
    Wang, Zheng
    Satoh, Shin'ichi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (10) : 3422 - 3432
  • [48] Visible-infrared person re-identification using query related cluster
    Zhao, Q.
    Wu, H.
    Huang, L.
    Zhu, J.
    Zeng, H.
    HIGH TECHNOLOGY LETTERS, 2023, 29 (02) : 194 - 205
  • [49] Grayscale Enhancement Colorization Network for Visible-Infrared Person Re-Identification
    Zhong, Xian
    Lu, Tianyou
    Huang, Wenxin
    Ye, Mang
    Jia, Xuemei
    Lin, Chia-Wen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (03) : 1418 - 1430
  • [50] Visible-Infrared Person Re-Identification: A Comprehensive Survey and a New Setting
    Zheng, Huantao
    Zhong, Xian
    Huang, Wenxin
    Jiang, Kui
    Liu, Wenxuan
    Wang, Zheng
    ELECTRONICS, 2022, 11 (03)