Augmented Dual-Contrastive Aggregation Learning for Unsupervised Visible-Infrared Person Re-Identification

Cited by: 34
Authors
Yang, Bin [1 ]
Ye, Mang [1 ]
Chen, Jun [1 ]
Wu, Zesen [1 ]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Wuhan, Peoples R China
Keywords
person re-identification; unsupervised learning; visible-infrared; cross-modality;
DOI
10.1145/3503161.3548198
CLC Number
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835
Abstract
Visible-infrared person re-identification (VI-ReID) aims to retrieve the corresponding infrared (visible) images from a gallery set captured by cameras of the other spectrum. Recent works mainly focus on supervised VI-ReID methods, which require plenty of cross-modality (visible-infrared) identity labels that are more expensive to annotate than those for single-modality person ReID. For unsupervised-learning visible-infrared re-identification (USL-VI-ReID), the large cross-modality discrepancy makes it difficult to generate reliable cross-modality labels and to learn modality-invariant features without any annotations. To address this problem, we propose a novel Augmented Dual-Contrastive Aggregation (ADCA) learning framework. Specifically, a dual-path contrastive learning framework with two modality-specific memories is proposed to learn intra-modality person representations. To associate positive cross-modality identities, we design a cross-modality memory aggregation module with count priority that selects highly associated positive samples and aggregates their corresponding memory features at the cluster level, ensuring that the optimization explicitly concentrates on the modality-irrelevant perspective. Extensive experiments demonstrate that our proposed ADCA significantly outperforms existing unsupervised methods under various settings, and even surpasses some supervised counterparts, facilitating the real-world deployment of VI-ReID. Code is available at https://github.com/yangbincv/ADCA.
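To make the dual-path design concrete: each modality keeps one cluster-level memory, features contrast only against the centroids of their own modality, and cross-modality association is left to a separate aggregation step. Below is a minimal sketch in PyTorch of the dual-memory contrastive part; it is not the authors' released code, and all names and hyperparameters (ClusterMemory, temperature=0.05, momentum=0.1) are illustrative assumptions.

import torch
import torch.nn.functional as F

class ClusterMemory:
    # Cluster-level feature memory for one modality (visible or infrared).
    # Hypothetical sketch: centroids are initialized randomly here; in
    # practice they would come from clustering over extracted features.
    def __init__(self, num_clusters, feat_dim, temperature=0.05, momentum=0.1):
        self.centroids = F.normalize(torch.randn(num_clusters, feat_dim), dim=1)
        self.temperature = temperature
        self.momentum = momentum

    def contrastive_loss(self, feats, labels):
        # feats: (B, D) L2-normalized features; labels: (B,) pseudo cluster ids.
        # InfoNCE-style loss against this modality's own cluster centroids.
        logits = feats @ self.centroids.t() / self.temperature
        return F.cross_entropy(logits, labels)

    @torch.no_grad()
    def update(self, feats, labels):
        # Momentum update of each assigned centroid, then re-normalize.
        for f, y in zip(feats, labels):
            c = self.momentum * f + (1.0 - self.momentum) * self.centroids[y]
            self.centroids[y] = F.normalize(c, dim=0)

def dual_contrastive_step(vis_feats, vis_labels, ir_feats, ir_labels, vis_mem, ir_mem):
    # Dual-path objective: each modality contrasts against its own memory,
    # learning intra-modality representations before any cross-modality
    # aggregation links the two memories at the cluster level.
    vis_feats = F.normalize(vis_feats, dim=1)
    ir_feats = F.normalize(ir_feats, dim=1)
    loss = vis_mem.contrastive_loss(vis_feats, vis_labels) \
         + ir_mem.contrastive_loss(ir_feats, ir_labels)
    vis_mem.update(vis_feats, vis_labels)
    ir_mem.update(ir_feats, ir_labels)
    return loss

# Toy usage: 4 pseudo clusters per modality, 16-dim features, batch of 8.
vis_mem, ir_mem = ClusterMemory(4, 16), ClusterMemory(4, 16)
loss = dual_contrastive_step(torch.randn(8, 16), torch.randint(0, 4, (8,)),
                             torch.randn(8, 16), torch.randint(0, 4, (8,)),
                             vis_mem, ir_mem)

In the full ADCA framework, the cross-modality memory aggregation module would additionally match and fuse highly associated visible and infrared centroids using count priority; that selection logic is omitted from this sketch.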
Pages: 2843-2851
Number of pages: 9
Related Papers (50 total)
  • [21] Multi-memory Matching for Unsupervised Visible-Infrared Person Re-identification
    Shi, Jiangming
    Shi, Xiangbo
    Chen, Yeyun
    Zhang, Yachao
    Zhang, Zhizhong
    Xie, Yuan
    Qu, Yanyun
    COMPUTER VISION-ECCV 2024, PT XVIII, 2025, 15076: 456-474
  • [22] Multi-Scale Contrastive Learning with Hierarchical Knowledge Synergy for Visible-Infrared Person Re-Identification
    Qian, Yongheng
    Tang, Su-Kit
    SENSORS, 2025, 25 (01)
  • [23] Hybrid Modality Metric Learning for Visible-Infrared Person Re-Identification
    Zhang, La
    Guo, Haiyun
    Zhu, Kuan
    Qiao, Honglin
    Huang, Gaopan
    Zhang, Sen
    Zhang, Huichen
    Sun, Jian
    Wang, Jinqiao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (01)
  • [25] Beyond a strong baseline: cross-modality contrastive learning for visible-infrared person re-identification
    Fang, Pengfei
    Zhang, Yukang
    Lan, Zhenzhong
    MACHINE VISION AND APPLICATIONS, 2023, 34 (06)
  • [26] Dual-level contrastive learning for unsupervised person re-identification
    Zhao, Yu
    Shu, Qiaoyuan
    Shi, Xi
    IMAGE AND VISION COMPUTING, 2023, 129
  • [28] Stronger Heterogeneous Feature Learning for Visible-Infrared Person Re-Identification
    Wang, Hao
    Bi, Xiaojun
    Yu, Changdong
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [29] Implicit Discriminative Knowledge Learning for Visible-Infrared Person Re-Identification
    Ren, Kaijie
    Zhang, Lei
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024: 393-402
  • [30] Visible-Infrared Person Re-Identification Via Feature Constrained Learning
    Zhang, Jing
    Chen, Guangfeng
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (12)