Infrared-visible person re-identification via Dual-Channel attention mechanism

Cited by: 2
Authors
Lv, Zhihan [1 ]
Zhu, Songhao [1 ]
Wang, Dongsheng [1 ]
Liang, Zhiwei [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Automation & Artificial Intelligence, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Person re-identification; Cross-modality; Attention mechanism; Dual-path;
DOI
10.1007/s11042-023-14486-y
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Infrared-visible person re-identification (IV-ReID) is a challenging task that aims to match pedestrian images captured by visible and thermal cameras. Differences in appearance between visible and infrared images arise from viewpoint changes, pose variations, and deformations, and an additional cross-modality gap is caused by the different camera spectra. These discrepancies make IV-ReID difficult to address. To solve this problem, we propose a dual-path network with an attention mechanism, the Convolutional Block Attention Module (CBAM), to learn discriminative feature representations, together with a modified Batch Norm Neck (BNNeck) module that fuses cross-modality feature representations to improve identity recognition accuracy. Specifically, the proposed method first constructs two independent networks to learn modality-specific feature representations; next, each feature representation is split into several stripes by a conventional average pooling layer; then, a shared layer is introduced to project the cross-modality feature representations into the same embedding space. Finally, we fuse a heterogeneous loss function and a cross-entropy loss function to measure feature similarity and improve performance. Experimental results on two public cross-modality person re-identification datasets (SYSU-MM01 and RegDB) demonstrate that the proposed method significantly improves the performance of IV-ReID.
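The pipeline described in the abstract (modality-specific branches, channel attention, stripe pooling, and a shared projection layer) can be sketched as follows. This is a minimal illustrative NumPy mock-up, not the authors' implementation: all weights are random, the two CNN branches are replaced by random feature maps, CBAM's spatial-attention half and the loss-fusion step are omitted for brevity, and every name in the snippet is hypothetical.

```python
# Illustrative sketch (NOT the paper's code): CBAM-style channel attention,
# horizontal stripe pooling, and a shared cross-modality projection in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM channel attention: shared 2-layer MLP over avg- and max-pooled
    channel descriptors, sigmoid gate, then channel-wise reweighting."""
    avg = feat.mean(axis=(1, 2))                      # (C,) average descriptor
    mx = feat.max(axis=(1, 2))                        # (C,) max descriptor
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) +
                  w2 @ np.maximum(w1 @ mx, 0))        # (C,) attention weights
    return feat * att[:, None, None]

def stripe_features(feat, num_stripes):
    """Split the feature map into horizontal stripes; average-pool each one."""
    rows = np.array_split(np.arange(feat.shape[1]), num_stripes)
    return np.stack([feat[:, idx, :].mean(axis=(1, 2)) for idx in rows])  # (P, C)

# Toy dimensions: C=64 channels, 12x4 spatial map, 6 stripes, 32-d embedding.
C, H, W, P, D = 64, 12, 4, 6, 32
w1 = rng.standard_normal((C // 16, C)) * 0.1          # reduction MLP (shared)
w2 = rng.standard_normal((C, C // 16)) * 0.1
w_shared = rng.standard_normal((D, C)) * 0.1          # shared modality-bridging layer

# Stand-ins for the outputs of the two modality-specific CNN branches.
feat_vis = rng.standard_normal((C, H, W))
feat_ir = rng.standard_normal((C, H, W))

embeds = []
for feat in (feat_vis, feat_ir):
    feat = channel_attention(feat, w1, w2)            # reweight channels
    parts = stripe_features(feat, P)                  # (P, C) part descriptors
    embeds.append(parts @ w_shared.T)                 # (P, D) shared embedding

print(embeds[0].shape, embeds[1].shape)               # (6, 32) (6, 32)
```

The key design point the sketch mirrors is that the attention and pooling stages run independently per modality, while `w_shared` is the single layer both branches pass through, forcing visible and infrared part features into one embedding space before any similarity is measured.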
Pages: 22631-22649
Number of pages: 19