Counterfactual attention alignment for visible-infrared cross-modality person re-identification

Cited by: 4
Authors
Sun, Zongzhe [1 ]
Zhao, Feng [1 ]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230027, Peoples R China
Keywords
Person re-identification; Cross-modal; Attention
DOI
10.1016/j.patrec.2023.03.008
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Visible-infrared person re-identification (VI-ReID) copes with cross-modality matching between day-time visible and night-time infrared images. Existing methods try to use attention modules to enhance multi-modality feature representations, but ignore measures of attention quality and lack direct and effective supervision of the attention learning process. To solve these problems, we propose a counterfactual attention alignment (CAA) strategy by mining intra-modality attention information with counterfactual causality and aligning the cross-modality attentions. Specifically, a self-weighted part attention module is designed to extract the pairwise attention information in local parts. The counterfactual attention alignment strategy obtains the learning results of the attention module through counterfactual intervention, and aligns the attention maps of the two modalities to find better shared cross-modality attention regions. Then the effect of the aligned attention on network prediction is used as a supervision signal to directly guide the attention module to learn more effective attention information. Extensive experimental results demonstrate that the proposed approach outperforms other state-of-the-art methods on two standard benchmarks. (c) 2023 Elsevier B.V. All rights reserved.
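The counterfactual intervention described in the abstract can be sketched in a few lines: the causal effect of a learned attention map on the prediction is estimated by comparing the factual prediction against one made under a counterfactual (here, uniform) attention map, and that difference serves as a supervision signal. The linear classifier, the uniform counterfactual, and the toy dimensions below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(features, attention, classifier):
    """Attention-weighted pooling over parts, then a linear classifier."""
    pooled = (attention[:, None] * features).sum(axis=0)  # (d,)
    return classifier @ pooled                            # (num_classes,)

# Toy setup: 6 local parts, 8-dim part features, 3 identities.
features = rng.normal(size=(6, 8))
classifier = rng.normal(size=(3, 8))

# Learned attention weights (softmax-normalized scores).
scores = rng.normal(size=6)
attention = np.exp(scores) / np.exp(scores).sum()

# Counterfactual intervention: replace learned attention with uniform attention.
uniform = np.full(6, 1.0 / 6)

# Effect of attention = factual logits minus counterfactual logits.
# A loss on this effect directly supervises the attention module.
effect = predict(features, attention, classifier) - predict(features, uniform, classifier)
print(effect.shape)  # (3,)
```

In a full VI-ReID model the same difference would be computed per modality on attention maps that have first been aligned across the visible and infrared branches, as the CAA strategy describes.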
Pages: 79 - 85
Page count: 7
Related papers
50 records
  • [21] Dual adaptive alignment and partitioning network for visible and infrared cross-modality person re-identification
    Liu, Qiang
    Teng, Qizhi
    Chen, Honggang
    Li, Bo
    Qing, Linbo
    APPLIED INTELLIGENCE, 2022, 52 (01) : 547 - 563
  • [22] Visible Infrared Cross-Modality Person Re-Identification Network Based on Adaptive Pedestrian Alignment
    Li, Bo
    Wu, Xiaohong
    Liu, Qiang
    He, Xiaohai
    Yang, Fei
    IEEE ACCESS, 2019, 7 : 171485 - 171494
  • [24] Multi-granularity feature utilization network for cross-modality visible-infrared person re-identification
    Zhang, Guoqing
    Zhang, Yinyin
    Chen, Yuhao
    Zhang, Hongwei
    Zheng, Yuhui
    SOFT COMPUTING, 2023,
  • [25] CM-NAS: Cross-Modality Neural Architecture Search for Visible-Infrared Person Re-Identification
    Fu, Chaoyou
    Hu, Yibo
    Wu, Xiang
    Shi, Hailin
    Mei, Tao
    He, Ran
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 11803 - 11812
  • [26] Interaction and Alignment for Visible-Infrared Person Re-Identification
    Gong, Jiahao
    Zhao, Sanyuan
    Lam, Kin-Man
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2253 - 2259
  • [27] Modality interactive attention for cross-modality person re-identification
    Zou, Zilin
    Chen, Ying
    IMAGE AND VISION COMPUTING, 2024, 148
  • [28] DMANet: Dual-modality alignment network for visible-infrared person re-identification
    Cheng, Xu
    Deng, Shuya
    Yu, Hao
    Zhao, Guoying
    PATTERN RECOGNITION, 2025, 157
  • [29] Modality Unifying Network for Visible-Infrared Person Re-Identification
    Yu, Hao
    Cheng, Xu
    Peng, Wei
    Liu, Weihao
    Zhao, Guoying
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 11151 - 11161
  • [30] DMA: Dual Modality-Aware Alignment for Visible-Infrared Person Re-Identification
    Cui, Zhenyu
    Zhou, Jiahuan
    Peng, Yuxin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2696 - 2708