Dual Consistency-Constrained Learning for Unsupervised Visible-Infrared Person Re-Identification

Cited by: 4
Authors
Yang, Bin [1 ]
Chen, Jun [1 ]
Chen, Cuiqun [1 ,2 ]
Ye, Mang [1 ]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Hubei Luojia Lab, Wuhan, 430072, Peoples R China
[2] Wuhan Text Univ, Engn Res Ctr Hubei Prov Clothing Informat, Wuhan 430079, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Cameras; Feature extraction; Data mining; Surveillance; Annotations; Task analysis; Person re-identification; visible-infrared; unsupervised learning; cross-modality; LABEL;
DOI
10.1109/TIFS.2023.3341392
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Unsupervised visible-infrared person re-identification (US-VI-ReID) aims to learn a cross-modality matching model under unsupervised conditions, an important task for practical nighttime surveillance, where a specific identity must be retrieved. Previous advanced US-VI-ReID works mainly focus on associating positive cross-modality identities to optimize the feature extractor in an off-line manner, which inevitably accumulates errors from incorrect off-line cross-modality associations at each training epoch due to intra-modality and inter-modality discrepancies. They ignore direct cross-modality feature interaction during training, i.e., on-line representation learning and updating. Worse still, existing interaction methods are also susceptible to inter-modality differences, leading to unreliable heterogeneous neighborhood learning. To address these issues, we propose a dual consistency-constrained learning framework (DCCL) that simultaneously incorporates off-line cross-modality label refinement and on-line feature interaction learning. The basic idea is that cross-modality instance-instance and instance-identity relations should be consistent. More specifically, DCCL constructs an instance memory, an identity memory, and a domain memory for each modality. At the beginning of each training epoch, DCCL exploits the off-line consistency between cross-modality instance-instance and instance-identity similarities to refine reliable cross-modality identities. During training, DCCL finds credible homogeneous and heterogeneous neighborhoods within each batch using the on-line consistency between query-instance similarities and query-instance domain probability similarities for feature interaction, enhancing robustness against intra-modality and inter-modality variations. Extensive experiments validate that our method significantly outperforms existing works and even surpasses some supervised counterparts.
The source code is available at https://github.com/yangbincv/DCCL.
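The off-line consistency idea summarized in the abstract can be illustrated with a minimal, hypothetical sketch (random toy features; all variable names such as `ir_identity` are illustrative and not taken from the authors' implementation, which is linked above): a visible instance's cross-modality association is kept only when its nearest infrared instance and its nearest infrared identity centroid point to the same identity.

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalization so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy data: 4 visible instances, 3 infrared instances grouped into 2 identities.
rng = np.random.default_rng(0)
vis = l2norm(rng.normal(size=(4, 8)))   # visible instance memory (toy)
ir = l2norm(rng.normal(size=(3, 8)))    # infrared instance memory (toy)
ir_label = np.array([0, 0, 1])          # infrared cluster labels (toy)
# Infrared identity memory: mean feature of each cluster, re-normalized.
ir_identity = l2norm(np.stack([ir[ir_label == k].mean(0) for k in range(2)]))

# Simplified off-line consistency check: for each visible instance, the
# identity of its most similar infrared *instance* should match its most
# similar infrared *identity centroid*; only consistent associations are kept.
inst_sim = vis @ ir.T                       # instance-instance similarities
iden_sim = vis @ ir_identity.T              # instance-identity similarities
nearest_inst_id = ir_label[inst_sim.argmax(1)]
nearest_iden = iden_sim.argmax(1)
reliable = nearest_inst_id == nearest_iden  # mask of refined cross-modality labels
print(reliable)
```

In the paper this refinement runs at the start of each training epoch over the memory banks; the sketch only shows the agreement test between the two similarity views, not the memory updates or the on-line neighborhood selection.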
Pages: 1767-1779
Page count: 13
Related Papers
50 papers total
  • [31] Correlation-Guided Semantic Consistency Network for Visible-Infrared Person Re-Identification
    Li, Haojie
    Li, Mingxuan
    Peng, Qijie
    Wang, Shijie
    Yu, Hong
    Wang, Zhihui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4503 - 4515
  • [32] Modality-Shared Prototypes for Enhanced Unsupervised Visible-Infrared Person Re-Identification
    Chen, Xiaohan
    Wang, Suqing
    Zheng, Yujin
    PATTERN RECOGNITION AND COMPUTER VISION, PT XIII, PRCV 2024, 2025, 15043 : 237 - 250
  • [33] Dual-granularity feature fusion in visible-infrared person re-identification
    Cai, Shuang
    Yang, Shanmin
    Hu, Jing
    Wu, Xi
    IET IMAGE PROCESSING, 2024, 18 (04) : 972 - 980
  • [34] Visible-infrared person re-identification model based on feature consistency and modal indistinguishability
    Sun, Jia
    Li, Yanfeng
    Chen, Houjin
    Peng, Yahui
    Zhu, Jinlei
    MACHINE VISION AND APPLICATIONS, 2023, 34 (01)
  • [36] Adaptive Middle Modality Alignment Learning for Visible-Infrared Person Re-identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, : 2176 - 2196
  • [37] Multi-dimensional feature learning for visible-infrared person re-identification
    Yang, Zhenzhen
    Wu, Xinyi
    Yang, Yongpeng
    BIG DATA RESEARCH, 2025, 40
  • [38] Style-Agnostic Representation Learning for Visible-Infrared Person Re-Identification
    Wu, Jianbing
    Liu, Hong
    Shi, Wei
    Liu, Mengyuan
    Li, Wenhao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 2263 - 2275
  • [39] Towards a Unified Middle Modality Learning for Visible-Infrared Person Re-Identification
    Zhang, Yukang
    Yan, Yan
    Lu, Yang
    Wang, Hanzi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 788 - 796
  • [40] Visible-infrared person re-identification via specific and shared representations learning
    Zheng, Aihua
    Liu, Juncong
    Wang, Zi
    Huang, Lili
    Li, Chenglong
    Yin, Bing
    VISUAL INTELLIGENCE, 1 (1):