Dual-Path Imbalanced Feature Compensation Network for Visible-Infrared Person Re-Identification

Cited: 0
Authors
Cheng, Xu [1 ]
Wang, Zichun [1 ]
Jiang, Yan [1 ]
Liu, Xingyu [1 ]
Yu, Hao [1 ]
Shi, Jingang [2 ]
Yu, Zitong [3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp Sci, Nanjing, Peoples R China
[2] Xi An Jiao Tong Univ, Sch Software, Xian, Peoples R China
[3] Great Bay Univ, Sch Comp & Informat Technol, Dongguan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visible-infrared person re-identification; Modality imbalance; Feature re-assignment; Bidirectional heterogeneous compensation
DOI
10.1145/3700135
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Visible-infrared person re-identification (VI-ReID) is challenging due to the substantial cross-modality gap and large intra-class variations. Most existing methods concentrate on cross-modality alignment at the feature or image level and train with an equal number of samples from each modality. In real-world settings, however, visible and infrared data are modality-imbalanced, and the mismatch between training and test sample distributions degrades the robustness and generalization of VI-ReID models. To alleviate this problem, we propose a dual-path imbalanced feature compensation network (DICNet) for VI-ReID, which gives each modality an equal opportunity to learn inconsistent information from different identities in the other modality, enhancing identity discrimination and generalization. First, a modality consistency perception (MCP) module is designed to help the backbone focus on spatial and channel information, extracting diverse and salient features to strengthen the feature representation. Second, we propose a cross-modality feature re-assignment strategy that simulates modality imbalance by grouping and re-organizing the cross-modality features. Third, we perform bidirectional heterogeneous cooperative compensation with cross-modality imbalanced feature interaction modules (CIFIMs), allowing the network to explore identity-aware patterns across imbalanced feature groups for cross-modality interaction and fusion. Further, we design a feature reconstruction difference loss to reduce the cross-modality discrepancy and enrich feature diversity within each modality. Extensive experiments on three mainstream datasets demonstrate the superiority of DICNet, and competitive results under corrupted scenarios verify its generalization and robustness.
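The grouping and re-organizing step behind the cross-modality feature re-assignment strategy can be pictured with a minimal sketch. The code below is an illustrative assumption, not the authors' released implementation: the function name reassign_features, the 2048-dimensional backbone features, the group count, and the 3:1 visible/infrared mixing ratios are all hypothetical choices used only to show how grouped features from the two modalities could be recombined into modality-imbalanced mini-batches.

# Minimal sketch (assumed, not the authors' code) of re-assigning grouped
# visible and infrared features into modality-imbalanced mixtures.
import torch

def reassign_features(f_vis: torch.Tensor, f_ir: torch.Tensor, num_groups: int = 4):
    """Split each modality's features into `num_groups` groups along the batch
    dimension and build two imbalanced mixtures (visible-dominant and
    infrared-dominant). Group counts and ratios are illustrative only."""
    vis_groups = torch.chunk(f_vis, num_groups, dim=0)
    ir_groups = torch.chunk(f_ir, num_groups, dim=0)

    # Visible-dominant mixture: three visible groups plus one infrared group.
    vis_dominant = torch.cat(list(vis_groups[:3]) + [ir_groups[0]], dim=0)
    # Infrared-dominant mixture: one visible group plus three infrared groups.
    ir_dominant = torch.cat([vis_groups[3]] + list(ir_groups[1:]), dim=0)
    return vis_dominant, ir_dominant

if __name__ == "__main__":
    f_vis = torch.randn(32, 2048)   # e.g. backbone features of 32 visible images
    f_ir = torch.randn(32, 2048)    # e.g. backbone features of 32 infrared images
    mix_v, mix_i = reassign_features(f_vis, f_ir)
    print(mix_v.shape, mix_i.shape)  # both torch.Size([32, 2048])

Each mixture keeps the original batch size but with an unequal share of the two modalities, so downstream interaction modules see simulated modality imbalance during training.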
Pages: 24