Dual-Path Imbalanced Feature Compensation Network for Visible-Infrared Person Re-Identification

Cited by: 0
Authors
Cheng, Xu [1 ]
Wang, Zichun [1 ]
Jiang, Yan [1 ]
Liu, Xingyu [1 ]
Yu, Hao [1 ]
Shi, Jingang [2 ]
Yu, Zitong [3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Software, Xian, Peoples R China
[3] Great Bay Univ, Sch Comp & Informat Technol, Dongguan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Visible-infrared person re-identification; Modality imbalance; Feature re-assignment; Bidirectional heterogeneous compensation
DOI
10.1145/3700135
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Visible-infrared person re-identification (VI-ReID) presents significant challenges due to the substantial cross-modality gap and intra-class variations. Most existing methods concentrate on aligning the modalities at the feature or image level and train with an equal number of samples from each modality. In the real world, however, visible and infrared data are imbalanced, and the mismatch between balanced training samples and imbalanced test samples degrades the robustness and generalization of VI-ReID models. To alleviate this problem, we propose a dual-path imbalanced feature compensation network (DICNet) for VI-ReID, which gives each modality an equal opportunity to learn inconsistent information from different identities of the other modality, enhancing identity discrimination and generalization. First, a modality consistency perception (MCP) module is designed to help the backbone focus on spatial and channel information, extracting diverse and salient features to enhance feature representation. Second, we propose a cross-modality feature re-assignment strategy that simulates modality imbalance by grouping and re-organizing the cross-modality features. Third, we perform bidirectional heterogeneous cooperative compensation with cross-modality imbalanced feature interaction modules (CIFIMs), allowing the network to explore identity-aware patterns from the imbalanced features of multiple groups for cross-modality interaction and fusion. Further, we design a feature re-construction difference loss to reduce the cross-modality discrepancy and enrich feature diversity within each modality. Extensive experiments on three mainstream datasets demonstrate the superiority of DICNet, and competitive results under corrupted scenarios verify its generalization and robustness.
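The feature re-assignment strategy is only named in the abstract, so the toy sketch below is a rough illustration of the general idea: a batch of visible and infrared features is grouped and re-organized into groups with unequal modality ratios to simulate modality imbalance. The function name, tensor shapes, and the 2:1 / 1:2 split are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a cross-modality feature re-assignment step.
# Assumption: features from the visible and infrared paths are recombined
# into modality-imbalanced groups; the exact grouping scheme is not
# specified in the abstract.
import torch


def reassign_cross_modality_features(f_vis: torch.Tensor, f_ir: torch.Tensor):
    """Recombine per-modality feature batches into imbalanced groups.

    f_vis, f_ir: (B, D) feature batches from the visible and infrared paths.
    Returns two groups that mix the modalities in different ratios.
    """
    b = f_vis.size(0)
    # Shuffle within each modality so grouping is not tied to sample order.
    f_vis = f_vis[torch.randperm(b)]
    f_ir = f_ir[torch.randperm(b)]

    # Illustrative imbalanced split: group 0 visible-heavy, group 1 infrared-heavy.
    split = b * 2 // 3
    group_vis_heavy = torch.cat([f_vis[:split], f_ir[: b - split]], dim=0)  # ~2:1 vis:ir
    group_ir_heavy = torch.cat([f_vis[split:], f_ir[b - split:]], dim=0)    # ~1:2 vis:ir
    return [group_vis_heavy, group_ir_heavy]


if __name__ == "__main__":
    vis = torch.randn(6, 256)  # toy visible features
    ir = torch.randn(6, 256)   # toy infrared features
    for i, g in enumerate(reassign_cross_modality_features(vis, ir)):
        print(f"group {i}: {g.shape}")
```

In the paper, such imbalanced groups would then feed the CIFIMs for bidirectional cross-modality interaction and fusion; that stage is not sketched here.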
Pages: 24