BiFFN: Bi-Frequency Guided Feature Fusion Network for Visible-Infrared Person Re-Identification

Times Cited: 0
Authors
Cao, Xingyu [1 ]
Ding, Pengxin [1 ]
Li, Jie [1 ]
Chen, Mei [1 ]
Affiliations
[1] Chengdu Univ Informat Technol, Sch Comp Sci, Chengdu 610225, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
VI-ReID; frequency domain analysis; feature fusion; modality gap reduction;
DOI
10.3390/s25051298
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Visible-infrared person re-identification (VI-ReID) aims to match pedestrian images across the visible and infrared modalities, which requires minimizing the modality gap between them. Existing methods primarily extract cross-modality features from the spatial domain, which often limits the comprehensive extraction of useful information. To address this limitation, we propose a novel bi-frequency feature fusion network (BiFFN) that extracts and fuses high-frequency, low-frequency, and spatial-domain features to reduce the modality gap. Unlike conventional approaches that either focus on single-frequency components or employ simple multi-branch fusion strategies, BiFFN addresses the modality discrepancy through systematic frequency-space co-learning. The network introduces a frequency-spatial enhancement (FSE) module to enhance feature representation across both domains. A deep frequency mining (DFM) module then optimizes the use of cross-modality information by leveraging the distinct characteristics of high- and low-frequency features. A cross-frequency fusion (CFF) module further aligns low-frequency features and fuses them with high-frequency features to generate intermediate features that incorporate critical information from each modality. To refine the distribution of identity features in the common space, we develop a unified modality center (UMC) loss, which promotes a more balanced inter-modality distribution while preserving discriminative identity information. Extensive experiments demonstrate that the proposed BiFFN achieves state-of-the-art performance in VI-ReID: a Rank-1 accuracy of 77.5% and an mAP of 75.9% on the SYSU-MM01 dataset under the all-search mode, and a Rank-1 accuracy of 58.5% and an mAP of 63.7% on the LLCM dataset under the IR-VIS mode. These results verify that integrating feature fusion with frequency-domain information significantly reduces the modality gap and outperforms previous methods.
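The abstract names two mechanisms concrete enough to illustrate in code: decomposing features into low- and high-frequency components before fusing them, and a unified modality center (UMC) loss that balances the visible and infrared modalities around shared identity centers. The following is a minimal PyTorch sketch of both ideas, assuming an FFT-based low-pass/high-pass split and a mean-squared pull of per-modality identity centers toward their midpoint; the function names, the radius parameter, and the exact loss form are illustrative assumptions, since this record does not specify the FSE, DFM, or CFF modules or the paper's released implementation.

    import torch
    import torch.nn.functional as F

    def bi_frequency_split(feat: torch.Tensor, radius: float = 0.25):
        """Split a feature map (B, C, H, W) into low- and high-frequency parts
        using a circular low-pass mask in the 2D Fourier domain (an assumed
        design; the paper's actual decomposition may differ)."""
        _, _, H, W = feat.shape
        spec = torch.fft.fftshift(torch.fft.fft2(feat, norm="ortho"), dim=(-2, -1))
        yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, H, device=feat.device),
            torch.linspace(-1, 1, W, device=feat.device),
            indexing="ij",
        )
        low_mask = ((xx ** 2 + yy ** 2).sqrt() <= radius).to(spec.dtype)

        def to_spatial(s):
            return torch.fft.ifft2(torch.fft.ifftshift(s, dim=(-2, -1)), norm="ortho").real

        return to_spatial(spec * low_mask), to_spatial(spec * (1 - low_mask))

    def unified_modality_center_loss(emb, labels, modality):
        """For each identity seen in both modalities, pull its visible center
        (modality == 0) and infrared center (modality == 1) toward their shared
        mean -- one plausible reading of the UMC loss described in the abstract."""
        loss, pairs = emb.new_zeros(()), 0
        for pid in labels.unique():
            vis = emb[(labels == pid) & (modality == 0)]
            ir = emb[(labels == pid) & (modality == 1)]
            if len(vis) == 0 or len(ir) == 0:
                continue  # identity absent from one modality in this batch
            c_vis, c_ir = vis.mean(0), ir.mean(0)
            c_uni = 0.5 * (c_vis + c_ir)  # unified center for this identity
            loss = loss + F.mse_loss(c_vis, c_uni) + F.mse_loss(c_ir, c_uni)
            pairs += 1
        return loss / max(pairs, 1)

Under this reading, the UMC term complements the usual identity loss: pulling each identity's per-modality centers toward a shared point shrinks the inter-modality distance for that identity without collapsing samples of different identities together.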
Pages: 19
Related Papers (50 in total; the first 10 are shown)
  • [1] Cheng, Yunzhou; Xiao, Guoqiang; Tang, Xiaoqin; Ma, Wenzhuo; Gou, Xinye. Two-Phase Feature Fusion Network for Visible-Infrared Person Re-Identification. 2021 IEEE International Conference on Image Processing (ICIP), 2021: 1149-1153.
  • [2] Wang, Xianju; Chen, Cuiqun; Zhu, Yong; Chen, Shuguang. Feature Fusion and Center Aggregation for Visible-Infrared Person Re-Identification. IEEE Access, 2022, 10: 30949-30958.
  • [3] Qi, Mengzan; Chan, Sixian; Hang, Chen; Zhang, Guixu; Zeng, Tieyong; Li, Zhi. Auxiliary Representation Guided Network for Visible-Infrared Person Re-Identification. IEEE Transactions on Multimedia, 2025, 27: 340-355.
  • [4] Cai, Shuang; Yang, Shanmin; Hu, Jing; Wu, Xi. Dual-Granularity Feature Fusion in Visible-Infrared Person Re-Identification. IET Image Processing, 2024, 18(4): 972-980.
  • [5] Chen, Xiumei; Zheng, Xiangtao; Lu, Xiaoqiang. Identity Feature Disentanglement for Visible-Infrared Person Re-Identification. ACM Transactions on Multimedia Computing, Communications, and Applications, 2023, 19(6).
  • [6] Xu, BaiSheng; Ye, HaoHui; Wu, Wei. MGFNet: A Multi-Granularity Feature Fusion and Mining Network for Visible-Infrared Person Re-Identification. Neural Information Processing, ICONIP 2023, Part V, 2024, 14451: 15-28.
  • [7] Wang, Yiming; Chen, Xiaolong; Chai, Yi; Xu, Kaixiong; Jiang, Yutao; Liu, Bowen. Visible-Infrared Person Re-Identification with Complementary Feature Fusion and Identity Consistency Learning. International Journal of Machine Learning and Cybernetics, 2025, 16(1): 703-719.
  • [8] Yu, Hao; Cheng, Xu; Peng, Wei; Liu, Weihao; Zhao, Guoying. Modality Unifying Network for Visible-Infrared Person Re-Identification. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 11151-11161.
  • [9] Li, Haojie; Li, Mingxuan; Peng, Qijie; Wang, Shijie; Yu, Hong; Wang, Zhihui. Correlation-Guided Semantic Consistency Network for Visible-Infrared Person Re-Identification. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(6): 4503-4515.
  • [10] Liu, Shuaiyi; Han, Ke. Attention-Enhanced Feature Mapping Network for Visible-Infrared Person Re-Identification. Machine Vision and Applications, 2025, 36(2).