Channel decoupling network for cross-modality person re-identification

Cited by: 0
Authors
Chen, Jingying [1 ]
Chen, Chang [1 ]
Tan, Lei [1 ]
Peng, Shixin [1 ]
Affiliations
[1] Cent China Normal Univ, Natl Engn Res Ctr E Learning, Wuhan, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Person re-identification; Cross modality; Channel decoupling;
DOI
10.1007/s11042-022-13927-4
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Cross-modality person re-identification (CM-ReID) is a very challenging problem due to the discrepancy in data distributions between the visible and near-infrared modalities. To obtain a robust shared feature representation, existing methods mainly rely on image generation or feature constraints to reduce the modality discrepancy, ignoring the large gap between mixed-spectral visible images and single-spectral near-infrared images. In this paper, we address the problem by decoupling mixed-spectral visible images into three single-spectral subspaces: R, G, and B. After aligning the spectra, we observed that even a single-spectral image, used in place of the full visible image, yields better performance. Based on this observation, we further introduce a concise and effective three-path channel decoupling network (CDNet) for combining the three spectral images. Extensive experiments on the benchmark CM-ReID datasets SYSU-MM01 and RegDB show that our method achieves state-of-the-art performance and outperforms existing approaches by a large margin. On the RegDB dataset, the absolute gains in rank-1 accuracy and mAP exceed 15.4% and 8.5%, respectively, over the previous state-of-the-art methods.
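The channel decoupling idea described in the abstract can be illustrated with a minimal sketch: a mixed-spectral visible image is split into its R, G, and B bands, and each band is replicated to three channels so it can be fed to a standard 3-channel backbone. This is an assumption-laden illustration of the preprocessing concept, not the paper's actual CDNet code; the function name `decouple_channels` is hypothetical.

```python
import numpy as np

def decouple_channels(rgb):
    # Split an H x W x 3 visible image into three single-spectral
    # images (R, G, B). Each band is replicated back to 3 channels
    # so it can feed an unmodified 3-channel CNN backbone.
    paths = []
    for c in range(3):
        band = rgb[..., c:c + 1]             # one spectral band, H x W x 1
        paths.append(np.repeat(band, 3, axis=-1))
    return paths                             # [R-path, G-path, B-path]

# Toy 2 x 2 "visible" image
img = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
r_path, g_path, b_path = decouple_channels(img)
print(r_path.shape)  # (2, 2, 3)
```

In a three-path network, each single-spectral image would then pass through its own branch before the features are combined, which is one plausible reading of the "three-path" design named in the abstract.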
Pages: 14091-14105 (15 pages)
Related Papers (50 total)
  • [1] Channel decoupling network for cross-modality person re-identification
    Chen, Jingying; Chen, Chang; Tan, Lei; Peng, Shixin
    [J]. Multimedia Tools and Applications, 2023, 82: 14091-14105
  • [2] Self-attention Cross-modality Fusion Network for Cross-modality Person Re-identification
    Du, P.; Song, Y.-H.; Zhang, X.-Y.
    [J]. Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48(06): 1457-1468
  • [3] Cross-modality person re-identification via channel-based partition network
    Liu, Jiachang; Song, Wanru; Chen, Changhong; Liu, Feng
    [J]. Applied Intelligence, 2022, 52(03): 2423-2435
  • [4] Modality interactive attention for cross-modality person re-identification
    Zou, Zilin; Chen, Ying
    [J]. Image and Vision Computing, 2024, 148
  • [5] Cross-modality person re-identification algorithm using symmetric network
    Zhang, Yan; Xiang, Xu; Tang, Jun; Wang, Nian; Qu, Lei
    [J]. Guofang Keji Daxue Xuebao/Journal of National University of Defense Technology, 2022, 44(01): 122-128
  • [6] Triplet interactive attention network for cross-modality person re-identification
    Zhang, Chenrui; Chen, Ping; Lei, Tao; Meng, Hongying
    [J]. Pattern Recognition Letters, 2021, 152: 202-209
  • [7] A Survey on Cross-Modality Heterogeneous Person Re-identification
    Sun, R.; Zhao, Z.; Yang, Z.; Gao, J.
    [J]. Science Press, (33): 1066-1082
  • [8] Cross-Modality Channel Mixup and Modality Decorrelation for RGB-Infrared Person Re-Identification
    Hua, Boyu; Zhang, Junyin; Li, Ziqiang; Ge, Yongxin
    [J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2023, 5(04): 512-523
  • [9] Hierarchical Feature Fusion for Cross-Modality Person Re-identification
    Fu, Wen; Lim, Monghao
    [J]. International Journal of Pattern Recognition and Artificial Intelligence, 2024, 38(16)