Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification

Cited: 4
Authors:
Yang, Bin [1]
Chen, Jun [1]
Chen, Cuiqun [1,2]
Ye, Mang [1]
Affiliations:
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Hubei Luojia Lab, Wuhan 430072, Peoples R China
[2] Wuhan Text Univ, Engn Res Ctr Hubei Prov Clothing Informat, Wuhan 430079, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Training; Cameras; Feature extraction; Data mining; Surveillance; Annotations; Task analysis; Person re-identification; visible-infrared; unsupervised learning; cross-modality; LABEL
DOI:
10.1109/TIFS.2023.3341392
CLC classification:
TP301 [Theory, Methods]
Discipline code:
081202
Abstract:
Unsupervised visible-infrared person re-identification (US-VI-ReID) aims to learn a cross-modality matching model under unsupervised conditions, an important task for practical nighttime surveillance, where a specific identity must be retrieved across cameras. Previous advanced US-VI-ReID works mainly focus on associating positive cross-modality identities to optimize the feature extractor in an off-line manner, which inevitably accumulates errors from incorrect off-line cross-modality associations in each training epoch due to intra-modality and inter-modality discrepancies. They ignore direct cross-modality feature interaction during training, i.e., on-line representation learning and updating. Worse still, existing interaction methods are also susceptible to inter-modality differences, leading to unreliable heterogeneous neighborhood learning. To address these issues, we propose a dual consistency-constrained learning framework (DCCL) that simultaneously incorporates off-line cross-modality label refinement and on-line feature interaction learning. The basic idea is that the cross-modality instance-instance and instance-identity relations should be consistent. More specifically, DCCL constructs an instance memory, an identity memory, and a domain memory for each modality. At the beginning of each training epoch, DCCL exploits the off-line consistency of cross-modality instance-instance and instance-identity similarities to refine reliable cross-modality identities. During training, DCCL finds credible homogeneous and heterogeneous neighborhoods within a batch via the on-line consistency between query-instance similarities and query-instance domain probability similarities for feature interaction, enhancing robustness against intra-modality and inter-modality variations. Extensive experiments validate that our method significantly outperforms existing works and even surpasses some supervised counterparts.
The source code is available at https://github.com/yangbincv/DCCL.
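The off-line label-refinement idea described in the abstract, accepting a cross-modality pseudo-label only when the instance-level and identity-level nearest neighbors agree, can be sketched as follows. This is a minimal, hypothetical illustration under assumed data layouts (plain Python lists, cosine similarity); the function and memory names are not the authors' implementation, which should be consulted at the repository above.

```python
# Hypothetical sketch of off-line consistency-based label refinement:
# a visible query keeps a cross-modality pseudo-label only when its nearest
# infrared instance and its nearest infrared identity centroid agree.
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two feature vectors (non-zero assumed)."""
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)


def refine_cross_modality_labels(vis_feats, ir_instance_mem,
                                 ir_instance_labels, ir_identity_mem):
    """Return one pseudo-label per visible query, or -1 when the
    instance-instance and instance-identity votes disagree."""
    refined = []
    for q in vis_feats:
        # instance-instance similarity: identity of the closest infrared instance
        inst_vote = ir_instance_labels[max(
            range(len(ir_instance_mem)),
            key=lambda i: cosine(q, ir_instance_mem[i]))]
        # instance-identity similarity: index of the closest infrared centroid
        ident_vote = max(range(len(ir_identity_mem)),
                         key=lambda k: cosine(q, ir_identity_mem[k]))
        refined.append(ident_vote if inst_vote == ident_vote else -1)
    return refined
```

In this reading, the consistency constraint acts as a filter: disagreeing votes mark the association as unreliable (-1), so it is excluded from that epoch's cross-modality supervision rather than propagated as a noisy label.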
Pages: 1767-1779
Page count: 13