Learning Feature Recovery Transformer for Occluded Person Re-Identification

Cited by: 36
Authors
Xu, Boqiang [1 ,2 ]
He, Lingxiao [3 ]
Liang, Jian [1 ,2 ]
Sun, Zhenan [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Ctr Res Intelligent Percept & Comp, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] AI Res JD, Beijing 100020, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Occluded person re-identification; transformer; graph; occlusion recovery;
DOI
10.1109/TIP.2022.3186759
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One major issue challenging person re-identification (Re-ID) is the ubiquitous occlusion of captured persons. Occluded person Re-ID poses two main challenges: noise interference during feature matching, and the loss of pedestrian information caused by occlusions. In this paper, we propose a new approach called the Feature Recovery Transformer (FRT) to address both challenges simultaneously; it consists of two components, visibility graph matching and a feature recovery transformer. To reduce noise interference during feature matching, we focus on regions that are visible in both images and develop a visibility graph to compute similarity. To address the second challenge, based on the developed graph similarity, we propose a recovery transformer that, for each query image, exploits the feature sets of its k-nearest neighbors in the gallery to recover complete features. Extensive experiments across occluded, partial, and holistic person Re-ID datasets demonstrate the effectiveness of FRT. In particular, FRT outperforms state-of-the-art results by at least 6.2% Rank-1 accuracy and 7.2% mAP on the challenging Occluded-Duke dataset.
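For intuition, below is a minimal sketch of the two ideas the abstract describes: similarity computed only over parts visible in both images, and recovery of a query's occluded features from its k-nearest gallery neighbors. All names, tensor shapes, and the softmax k-NN weighting are illustrative assumptions, not the paper's exact formulation; in particular, FRT's recovery module is a learned transformer, for which a simple visibility-gated weighted average stands in here.

```python
import numpy as np

def visibility_similarity(q_feats, q_vis, g_feats, g_vis):
    """Part-level similarity restricted to regions visible in BOTH images.

    q_feats, g_feats: (P, D) L2-normalized part features.
    q_vis, g_vis:     (P,) visibility scores in [0, 1].
    """
    joint_vis = q_vis * g_vis                      # weight of co-visible parts
    part_sims = np.sum(q_feats * g_feats, axis=1)  # cosine similarity per part
    return np.sum(joint_vis * part_sims) / (np.sum(joint_vis) + 1e-8)

def recover_features(q_feats, q_vis, gallery_feats, gallery_vis, k=3):
    """Fill in occluded query parts from the k most similar gallery images.

    gallery_feats: (N, P, D); gallery_vis: (N, P).
    Returns completed (P, D) query features.
    """
    sims = np.array([visibility_similarity(q_feats, q_vis, g, v)
                     for g, v in zip(gallery_feats, gallery_vis)])
    topk = np.argsort(-sims)[:k]                   # indices of k nearest neighbors
    w = np.exp(sims[topk]) / np.exp(sims[topk]).sum()  # softmax over k-NN sims
    neighbor_avg = np.einsum('k,kpd->pd', w, gallery_feats[topk])
    vis = q_vis[:, None]                           # (P, 1), broadcast over D
    # Keep visible query parts; borrow from neighbors where visibility is low.
    return vis * q_feats + (1.0 - vis) * neighbor_avg
```

Re-ranking would then score the recovered query features against the gallery with the same visibility-aware similarity; the actual FRT learns both stages end-to-end rather than using this fixed averaging.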
Pages: 4651-4662
Page count: 12
Related Papers (50 in total)
  • [31] Pose-Guided Feature Alignment for Occluded Person Re-Identification
    Miao, Jiaxu
    Wu, Yu
    Liu, Ping
    Ding, Yuhang
    Yang, Yi
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 542 - 551
  • [32] Occluded person re-identification based on feature fusion and sparse reconstruction
    Gao, Fei
    Jin, Yiming
    Ge, Yisu
    Lu, Shufang
    Zhang, Yuanming
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (05) : 15061 - 15078
  • [33] Occluded person re-identification with deep learning: A survey and perspectives
    Ning, Enhao
    Wang, Changshuo
    Zhang, Huang
    Ning, Xin
    Tiwari, Prayag
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 239
  • [34] Parallel Dense Vision Transformer and Augmentation Network for Occluded Person Re-identification
    Yang, Chuxia
    Fan, Wanshu
    Wei, Ziqi
    Yang, Xin
    Zhang, Qiang
    Zhou, Dongsheng
    COMPUTER-AIDED DESIGN AND COMPUTER GRAPHICS, CAD/GRAPHICS 2023, 2024, 14250 : 138 - 153
  • [35] Focus and imagine: Occlusion suppression and repairing transformer for occluded person re-identification
    Zhang, Ziwen
    Han, Shoudong
    Liu, Donghaisheng
    Ming, Delie
    NEUROCOMPUTING, 2024, 578
  • [37] Dual-branch adaptive attention transformer for occluded person re-identification
    Lu, Yunhua
    Jiang, Mingzi
    Liu, Zhi
    Mu, Xinyu
    IMAGE AND VISION COMPUTING, 2023, 131
  • [38] Local-global aware-transformer for occluded person re-identification
    Liu, Jing
    Zhou, Guoqing
    ALEXANDRIA ENGINEERING JOURNAL, 2023, 84 : 71 - 78
  • [39] View Confusion Feature Learning for Person Re-identification
    Liu, Fangyi
    Zhang, Lei
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6638 - 6647
  • [40] Discriminative Spatial Feature Learning for Person Re-Identification
    Peng, Peixi
    Tian, Yonghong
    Huang, Yangru
    Wang, Xiangqian
    An, Huilong
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 274 - 283