Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification

Cited by: 4
Authors
Yang, Bin [1 ]
Chen, Jun [1 ]
Chen, Cuiqun [1 ,2 ]
Ye, Mang [1 ]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Hubei Luojia Lab, Wuhan 430072, Peoples R China
[2] Wuhan Text Univ, Engn Res Ctr Hubei Prov Clothing Informat, Wuhan 430079, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Cameras; Feature extraction; Data mining; Surveillance; Annotations; Task analysis; Person re-identification; visible-infrared; unsupervised learning; cross-modality; LABEL;
DOI
10.1109/TIFS.2023.3341392
CLC number
TP301 [Theory, Methods];
Subject classification code
081202;
Abstract
Unsupervised visible-infrared person re-identification (US-VI-ReID) aims to learn a cross-modality matching model under unsupervised conditions, an extremely important task for retrieving a specific identity in practical nighttime surveillance. Previous advanced US-VI-ReID works mainly focus on associating positive cross-modality identities to optimize the feature extractor in an off-line manner, which inevitably accumulates errors from incorrect off-line cross-modality associations in each training epoch due to intra-modality and inter-modality discrepancies. They ignore direct cross-modality feature interaction during training, i.e., on-line representation learning and updating. Worse still, existing interaction methods are also susceptible to inter-modality differences, leading to unreliable heterogeneous neighborhood learning. To address these issues, we propose a dual consistency-constrained learning framework (DCCL) that simultaneously incorporates off-line cross-modality label refinement and on-line feature interaction learning. The basic idea is that the cross-modality instance-instance and instance-identity relations should be consistent. More specifically, DCCL constructs an instance memory, an identity memory, and a domain memory for each modality. At the beginning of each training epoch, DCCL exploits the off-line consistency between cross-modality instance-instance and instance-identity similarities to refine reliable cross-modality identities. During training, DCCL finds credible homogeneous and heterogeneous neighborhoods within a batch using the on-line consistency between query-instance similarity and query-instance domain probability similarity for feature interaction, enhancing robustness against intra-modality and inter-modality variations. Extensive experiments validate that our method significantly outperforms existing works and even surpasses some supervised counterparts.
The source code is available at https://github.com/yangbincv/DCCL.
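The dual-consistency idea in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the authors' implementation (see their repository for that): all names, the toy data, and the concrete consistency tests (nearest-neighbor agreement off-line, top-k agreement between feature similarity and domain-probability similarity on-line) are assumptions chosen to make the two consistency constraints concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    """L2-normalize feature rows so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy memories for two modalities (dimensions and contents are illustrative).
vis_instances = l2norm(rng.normal(size=(8, 16)))   # visible instance memory
ir_instances  = l2norm(rng.normal(size=(6, 16)))   # infrared instance memory
ir_identities = l2norm(rng.normal(size=(3, 16)))   # infrared identity (centroid) memory
ir_labels     = np.array([0, 0, 1, 1, 2, 2])       # cluster label of each IR instance
domain_mem    = l2norm(rng.normal(size=(2, 16)))   # toy domain memory (2 domains)

def refine_cross_modality_labels(query_feats, inst_mem, id_mem, inst_labels):
    """Off-line sketch: accept a cross-modality identity only when the nearest
    instance and the nearest identity centroid agree (instance-instance vs.
    instance-identity consistency)."""
    refined = []
    for q in query_feats:
        nearest_inst = int(np.argmax(inst_mem @ q))  # instance-instance similarity
        nearest_id   = int(np.argmax(id_mem @ q))    # instance-identity similarity
        # Keep the association only if both similarity views agree.
        refined.append(nearest_id if inst_labels[nearest_inst] == nearest_id else -1)
    return np.array(refined)

def select_credible_neighbors(query, inst_mem, dom_mem, k=2):
    """On-line sketch: keep only neighbors that rank high in BOTH feature
    similarity and domain-probability similarity."""
    def domain_prob(f):
        s = dom_mem @ f
        e = np.exp(s - s.max())                      # softmax over domain scores
        return e / e.sum()
    feat_sim = inst_mem @ query
    q_prob = domain_prob(query)
    # Similarity of domain-probability vectors (negative L1 distance).
    prob_sim = np.array([-np.abs(domain_prob(f) - q_prob).sum() for f in inst_mem])
    top_feat = set(np.argsort(-feat_sim)[:k])
    top_prob = set(np.argsort(-prob_sim)[:k])
    return sorted(top_feat & top_prob)               # consistent neighbors only

labels = refine_cross_modality_labels(vis_instances, ir_instances, ir_identities, ir_labels)
print(labels)  # -1 marks visible instances with inconsistent cross-modality evidence
print(select_credible_neighbors(vis_instances[0], ir_instances, domain_mem))
```

The sketch captures only the gating logic: associations or neighbors surviving both consistency checks would then drive label refinement and feature interaction, which the actual framework performs with learned features and memory updates.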
Pages: 1767-1779
Page count: 13
Related Papers
50 items in total
  • [21] Hybrid Modality Metric Learning for Visible-Infrared Person Re-Identification
    Zhang, La
    Guo, Haiyun
    Zhu, Kuan
    Qiao, Honglin
    Huang, Gaopan
    Zhang, Sen
    Zhang, Huichen
    Sun, Jian
    Wang, Jinqiao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (01)
  • [23] Stronger Heterogeneous Feature Learning for Visible-Infrared Person Re-Identification
    Wang, Hao
    Bi, Xiaojun
    Yu, Changdong
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [24] Implicit Discriminative Knowledge Learning for Visible-Infrared Person Re-Identification
    Ren, Kaijie
    Zhang, Lei
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 393 - 402
  • [25] Contrastive Learning with Information Compensation for Visible-Infrared Person Re-Identification
    Zhang, La
    Guo, Haiyun
    Zhao, Xu
    Sun, Jian
    Wang, Jinqiao
    2024 14TH ASIAN CONTROL CONFERENCE, ASCC 2024, 2024, : 1266 - 1271
  • [26] Progressive Discriminative Feature Learning for Visible-Infrared Person Re-Identification
    Zhou, Feng
    Cheng, Zhuxuan
    Yang, Haitao
    Song, Yifeng
    Fu, Shengpeng
    ELECTRONICS, 2024, 13 (14)
  • [27] Fine-grained Learning for Visible-Infrared Person Re-identification
    Qi, Mengzan
    Chan, Sixian
    Hang, Chen
    Zhang, Guixu
    Li, Zhi
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2417 - 2422
  • [28] Learning with Twin Noisy Labels for Visible-Infrared Person Re-Identification
    Yang, Mouxing
    Huang, Zhenyu
    Hu, Peng
    Li, Taihao
    Lv, Jiancheng
    Peng, Xi
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 14288 - 14297
  • [29] Attributes Based Visible-Infrared Person Re-identification
    Zheng, Aihua
    Feng, Mengya
    Pan, Peng
    Jiang, Bo
    Luo, Bin
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 254 - 266
  • [30] Interaction and Alignment for Visible-Infrared Person Re-Identification
    Gong, Jiahao
    Zhao, Sanyuan
    Lam, Kin-Man
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2253 - 2259