VAC-Net: Visual Attention Consistency Network for Person Re-identification

Cited by: 1
Authors
Shi, Weidong [1 ]
Zhang, Yunzhou [1 ]
Zhu, Shangdong [1 ]
Liu, Yixiu [1 ]
Coleman, Sonya [2 ]
Kerr, Dermot [2 ]
Affiliations
[1] Northeastern Univ, Shenyang, Liaoning, Peoples R China
[2] Univ Ulster, York St, Belfast, Antrim, North Ireland
Funding
National Natural Science Foundation of China;
Keywords
Person re-identification; Viewpoint change; Scale variations; Visual attention;
DOI
10.1145/3512527.3531409
CLC number
TP [Automation technology and computer technology];
Discipline code
0812;
Abstract
Person re-identification (ReID) is the task of recognising pedestrians across multiple surveillance cameras. Although significant progress has been made in recent years, viewpoint changes and scale variations still degrade model performance. In this paper, we observe that the model handles these issues better when its ability to extract consistent features across different transforms (e.g., flipping and scaling) of the same image is strengthened. To this end, we propose a visual attention consistency network (VAC-Net). Specifically, we propose an Embedding Spatial Consistency (ESC) architecture that takes the flipped, scaled, and original forms of the same image as inputs to learn a consistent embedding space. Furthermore, we design an Input-Wise visual attention consistent loss (IW-loss) that aligns the class activation maps (CAMs) of the three transforms with each other, so that their high-level semantic information remains consistent. Finally, we propose a Layer-Wise visual attention consistent loss (LW-loss) that further enforces consistency between the semantic information at different stages and the CAMs within each branch. Together, these two losses effectively improve the model's robustness to viewpoint and scale variations. Experiments on the challenging Market-1501, DukeMTMC-reID, and MSMT17 datasets demonstrate the effectiveness of the proposed VAC-Net.
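The core idea behind the IW-loss can be illustrated with a minimal sketch: spatially realign the CAMs of the flipped and scaled branches back to the original frame, then penalise disagreement between each pair of aligned maps. The function name, nearest-neighbour resizing, and the mean-squared-error penalty below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def iw_consistency_loss(cam_orig, cam_flip, cam_scale):
    """Illustrative input-wise attention consistency loss (assumed form).

    cam_orig:  CAM of the original image, shape (H, W)
    cam_flip:  CAM of the horizontally flipped image, shape (H, W)
    cam_scale: CAM of the rescaled image, shape (h, w)
    """
    # Undo the horizontal flip so the maps are spatially aligned.
    cam_flip_aligned = cam_flip[:, ::-1]

    # Resample the scaled-branch CAM back to (H, W); nearest-neighbour
    # sampling stands in for whatever interpolation the paper uses.
    H, W = cam_orig.shape
    h, w = cam_scale.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    cam_scale_aligned = cam_scale[np.ix_(rows, cols)]

    # Penalise pairwise disagreement between the aligned attention maps.
    def mse(a, b):
        return float(np.mean((a - b) ** 2))

    return (mse(cam_orig, cam_flip_aligned)
            + mse(cam_orig, cam_scale_aligned)
            + mse(cam_flip_aligned, cam_scale_aligned)) / 3.0
```

If the three branches attend to the same regions after realignment, the loss is zero; any transform-dependent drift in attention is penalised, which matches the consistency objective described in the abstract.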
Pages: 571-578
Number of pages: 8
Related papers
(50 in total)
  • [21] Double-Resolution Attention Network for Person Re-Identification
    Hu, Jiajie
    Li, Chungeng
    An, Jubai
    Huang, Chao
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (20)
  • [22] Semantic guidance attention network for occluded person re-identification
    Ren X.
    Zhang D.
    Bao X.
    Li B.
    Tongxin Xuebao/Journal on Communications, 2021, 42 (10): 106-116
  • [23] Deep Network with Spatial and Channel Attention for Person Re-identification
    Guo, Tiansheng
    Wang, Dongfei
    Jiang, Zhuqing
    Men, Aidong
    Zhou, Yun
    2018 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (IEEE VCIP), 2018,
  • [24] A part-based attention network for person re-identification
    Zhong, Weilin
    Jiang, Linfeng
    Zhang, Tao
    Ji, Jinsheng
    Xiong, Huilin
    Multimedia Tools and Applications, 2020, 79: 22525-22549
  • [25] Attention-Aware Adversarial Network for Person Re-Identification
    Shen, Aihong
    Wang, Huasheng
    Wang, Junjie
    Tan, Hongchen
    Liu, Xiuping
    Cao, Junjie
    APPLIED SCIENCES-BASEL, 2019, 9 (08)
  • [26] A part-based attention network for person re-identification
    Zhong, Weilin
    Jiang, Linfeng
    Zhang, Tao
    Ji, Jinsheng
    Xiong, Huilin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (31-32): 22525-22549
  • [27] Dual semantic interdependencies attention network for person re-identification
    Yang, Shengrong
    Hu, Haifeng
    Chen, Dihu
    Su, Tao
    ELECTRONICS LETTERS, 2020, 56 (25) : 1411 - 1413
  • [28] Reverse Pyramid Attention Guidance Network for Person Re-Identification
    Liu, Jiang
    Bai, Wei
    Hui, Yun
    INTERNATIONAL JOURNAL OF COGNITIVE INFORMATICS AND NATURAL INTELLIGENCE, 2024, 18 (01)
  • [29] An efficient feature pyramid attention network for person re-identification
    Luo, Qian
    Shao, Jie
    Dang, Wanli
    Wang, Chao
    Cao, Libo
    Zhang, Tao
    IMAGE AND VISION COMPUTING, 2024, 145
  • [30] HPAN: A Hybrid Pose Attention Network for Person Re-Identification
    Huan, Ruohong
    Chen, Tianya
    Zhan, Ziwei
    Chen, Peng
    Liang, Ronghua
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436 : 198 - 211