A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification

Cited by: 2
Authors
Gao, Zan [1 ,2 ]
Wei, Hongwei [1 ]
Guan, Weili [3 ]
Nie, Jie [4 ]
Wang, Meng [5 ]
Chen, Shengyong [2 ]
Affiliations
[1] Qilu Univ Technol, Shandong Artificial Intelligence Inst, Shandong Acad Sci, Jinan 250014, Peoples R China
[2] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[3] Monash Univ, Fac Informat Technol, Clayton, Vic 3800, Australia
[4] Ocean Univ China, Coll Informat Sci & Engn, Qingdao 266100, Peoples R China
[5] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Visualization; Semantics; Task analysis; Clothing; Pedestrians; Shape; Cloth-changing person re-identification (ReID); human semantic attention (HSA); semantic-aware; visual clothes shielding (VCS);
DOI
10.1109/TNNLS.2023.3329384
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cloth-changing person re-identification (ReID) is a newly emerging research topic that aims to retrieve pedestrians whose clothes have changed. Since human appearance under different clothes exhibits large variations, it is very difficult for existing approaches to extract discriminative and robust feature representations. Current works mainly focus on body shape or contour sketches, but the human semantic information and the potential consistency of pedestrian features before and after changing clothes are not fully explored or are ignored. To solve these issues, in this work, a novel semantic-aware attention and visual shielding network for cloth-changing person ReID (abbreviated as SAVS) is proposed, where the key idea is to shield clues related to the appearance of clothes and focus only on visual semantic information that is insensitive to view/posture changes. Specifically, a visual semantic encoder is first employed to locate the human body and clothing regions based on human semantic segmentation information. Then, a human semantic attention (HSA) module is proposed to highlight the human semantic information and reweight the visual feature map. In addition, a visual clothes shielding (VCS) module is designed to extract a more robust feature representation for the cloth-changing task by covering the clothing regions and focusing the model on the visual semantic information unrelated to clothes. Most importantly, these two modules are jointly explored in an end-to-end unified framework. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods and extracts more robust features for cloth-changing persons. Compared with the multibiometric unified network (MBUNet) (published in TIP2023), this method achieves improvements of 17.5% (30.9%) and 8.5% (10.4%) on the LTCC and Celeb-reID datasets in terms of mean average precision (mAP) (rank-1), respectively.
When compared with the Swin Transformer (Swin-T), the improvements reach 28.6% (17.3%), 22.5% (10.0%), 19.5% (10.2%), and 8.6% (10.1%) on the PRCC, LTCC, Celeb-reID, and NKUP datasets in terms of rank-1 (mAP), respectively.
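The two modules described in the abstract (semantic-guided attention reweighting, then clothing-region shielding) can be sketched roughly as follows. The function names, tensor shapes, and the exact elementwise operations are illustrative assumptions for exposition, not the authors' implementation, which operates inside an end-to-end trained network.

```python
import numpy as np

def hsa_reweight(feat, body_mask):
    """Human semantic attention (illustrative): reweight a C x H x W
    feature map so responses in human-body regions are emphasized.
    `body_mask` is an H x W map in [0, 1] from a semantic parser."""
    # Broadcast the spatial mask over all channels; body pixels are boosted.
    return feat * (1.0 + body_mask)[None, :, :]

def vcs_shield(feat, clothes_mask):
    """Visual clothes shielding (illustrative): suppress responses that
    fall inside clothing regions, keeping clothes-irrelevant cues."""
    return feat * (1.0 - clothes_mask)[None, :, :]

# Toy example: a 2-channel 2x2 feature map of ones.
feat = np.ones((2, 2, 2))
body_mask = np.array([[1.0, 0.0],
                      [1.0, 0.0]])     # left column = person
clothes_mask = np.array([[0.0, 0.0],
                         [1.0, 0.0]])  # lower-left pixel = clothing

out = vcs_shield(hsa_reweight(feat, body_mask), clothes_mask)
print(out[0])  # body pixel boosted to 2.0, clothing pixel zeroed
```

In this toy run, the non-clothing body pixel is amplified while the clothing pixel is masked out, which is the qualitative effect the HSA and VCS modules are described as producing.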
Pages: 1243-1257
Page count: 15
Related papers
50 records in total
  • [1] A Semantic-Aware Attention and Visual Shielding Network for Cloth-Changing Person Re-Identification
    Gao, Zan
    Wei, Hongwei
    Guan, Weili
    Nie, Jie
    Wang, Meng
    Chen, Shengyong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1243 - 1257
  • [2] Semantic-aware Consistency Network for Cloth-changing Person Re-Identification
    Guo, Peini
    Liu, Hong
    Wu, Jianbing
    Wang, Guoquan
    Wang, Tao
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8730 - 8739
  • [3] Person Re-identification with a Cloth-Changing Aware Transformer
    Ren, Xuena
    Zhang, Dongming
    Bao, Xiuguo
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [4] A Cloth-Irrelevant Harmonious Attention Network for Cloth-Changing Person Re-identification
    Zhou, Zihui
    Liu, Hong
    Shi, Wei
    Tang, Hao
    Shi, Xingyue
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 989 - 995
  • [5] Multigranular Visual-Semantic Embedding for Cloth-Changing Person Re-identification
    Gao, Zan
    Wei, Hongwei
    Guan, Weili
    Nie, Weizhi
    Liu, Meng
    Wang, Meng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 3703 - 3711
  • [6] Cloth-Changing Person Re-identification with Self-Attention
    Bansal, Vaibhav
    Foresti, Gian Luca
    Martinel, Niki
    2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022), 2022, : 602 - 610
  • [7] Cloth-Aware Center Cluster Loss for Cloth-Changing Person Re-identification
    Li, Xulin
    Liu, Bin
    Lu, Yan
    Chu, Qi
    Yu, Nenghai
    PATTERN RECOGNITION AND COMPUTER VISION, PT I, PRCV 2022, 2022, 13534 : 527 - 539
  • [8] Semantic-Guided Pixel Sampling for Cloth-Changing Person Re-Identification
    Shu, Xiujun
    Li, Ge
    Wang, Xiao
    Ruan, Weijian
    Tian, Qi
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1365 - 1369
  • [9] Pose-Guided Attention Learning for Cloth-Changing Person Re-Identification
    Liu, Xiangzeng
    Liu, Kunpeng
    Guo, Jianfeng
    Zhao, Peipei
    Quan, Yining
    Miao, Qiguang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5490 - 5498
  • [10] Attention-enhanced controllable disentanglement for cloth-changing person re-identification
    Ge, Yiyuan
    Yu, Mingxin
    Chen, Zhihao
    Lu, Wenshuai
    Dai, Yuxiang
    Shi, Huiyu
    VISUAL COMPUTER, 2024,