Co-Saliency Spatio-Temporal Interaction Network for Person Re-Identification in Videos

Cited by: 0
Authors
Liu, Jiawei [1 ]
Zha, Zheng-Jun [1 ]
Zhu, Xierong [1 ]
Jiang, Na [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Capital Normal Univ, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Person re-identification aims to identify a specific pedestrian across non-overlapping camera networks. Video-based person re-identification approaches have gained significant attention recently, extending image-based approaches by learning features from multiple frames. In this work, we propose a novel Co-Saliency Spatio-Temporal Interaction Network (CSTNet) for person re-identification in videos. It captures the common salient foreground regions among video frames and explores the spatio-temporal long-range context interdependency within such regions, toward learning a discriminative pedestrian representation. Specifically, multiple co-saliency learning modules within CSTNet are designed to exploit the correlated information across video frames, extracting salient features from task-relevant regions while suppressing background interference. Moreover, multiple spatio-temporal interaction modules within CSTNet exploit the spatial and temporal long-range context interdependencies of these features, together with the correlation between spatial and temporal information, to further enhance the feature representation. Extensive experiments on two benchmarks demonstrate the effectiveness of the proposed method.
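Only this abstract-level description of CSTNet is available here, so the following is a minimal PyTorch sketch rather than the authors' implementation: it assumes the co-saliency learning module can be approximated by reweighting each frame's features according to their agreement with a descriptor pooled from the other frames, and the spatio-temporal interaction module by non-local-style self-attention over all space-time positions. The class names (CoSaliencyModule, SpatioTemporalInteractionModule), tensor layout, and hyperparameters are hypothetical.

```python
# Minimal sketch; the module designs below are assumptions, not the published CSTNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoSaliencyModule(nn.Module):
    """Reweights each frame's feature map by its agreement with a descriptor
    pooled from the other frames of the clip, so that regions shared across
    frames (the pedestrian) are emphasized and background clutter is suppressed."""

    def __init__(self, channels, reduced=128):
        super().__init__()
        self.embed = nn.Conv2d(channels, reduced, kernel_size=1)

    def forward(self, x):
        # x: (B, T, C, H, W) clip-level convolutional features
        b, t, c, h, w = x.shape
        feat = self.embed(x.flatten(0, 1))            # (B*T, R, H, W)
        feat = feat.view(b, t, -1, h * w)             # (B, T, R, HW)
        # Descriptor of "all other frames" for every frame in the clip.
        frame_desc = feat.mean(dim=-1)                # (B, T, R)
        others = (frame_desc.sum(dim=1, keepdim=True) - frame_desc) / (t - 1)
        # Cosine similarity between each location and the cross-frame descriptor.
        sal = F.cosine_similarity(feat, others.unsqueeze(-1), dim=2)  # (B, T, HW)
        sal = torch.sigmoid(sal).view(b, t, 1, h, w)
        return x * sal                                # co-saliency weighted features


class SpatioTemporalInteractionModule(nn.Module):
    """Non-local-style self-attention over all space-time positions, modelling
    long-range spatial and temporal context interdependencies jointly."""

    def __init__(self, channels, reduced=128):
        super().__init__()
        self.query = nn.Conv3d(channels, reduced, kernel_size=1)
        self.key = nn.Conv3d(channels, reduced, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.out = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (B, T, C, H, W); Conv3d expects (B, C, T, H, W)
        b, t, c, h, w = x.shape
        y = x.permute(0, 2, 1, 3, 4)
        q = self.query(y).flatten(2).transpose(1, 2)  # (B, THW, R)
        k = self.key(y).flatten(2)                    # (B, R, THW)
        v = self.value(y).flatten(2).transpose(1, 2)  # (B, THW, C)
        attn = torch.softmax(q @ k / q.size(-1) ** 0.5, dim=-1)  # (B, THW, THW)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, t, h, w)
        y = y + self.out(ctx)                         # residual connection
        return y.permute(0, 2, 1, 3, 4)               # back to (B, T, C, H, W)


if __name__ == "__main__":
    clip = torch.randn(2, 8, 256, 16, 8)              # 2 clips of 8 frames each
    clip = CoSaliencyModule(256)(clip)
    clip = SpatioTemporalInteractionModule(256)(clip)
    print(clip.shape)                                 # torch.Size([2, 8, 256, 16, 8])
```

In such a design the saliency weighting suppresses regions that are not consistent across frames, while the attention step lets every spatial location at every frame aggregate context from the whole clip; the actual CSTNet formulations may differ.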
Pages: 1012 - 1018
Number of pages: 7
Related Papers
50 records in total
  • [1] Person Re-identification in Videos by Analyzing Spatio-temporal Tubes
    Sekh, Arif Ahmed
    Dogra, Debi Prosad
    Choi, Heeseung
    Chae, Seungho
    Kim, Ig-Jae
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (33-34) : 24537 - 24551
  • [2] Deep Spatio-temporal Network for Accurate Person Re-identification
    Quan Nguyen Hong
    Nghia Nguyen Tuan
    Trung Tran Quang
    Dung Nguyen Tien
    Cuong Vo Le
    2017 PROCEEDINGS OF KICS-IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATIONS WITH SAMSUNG LTE & 5G SPECIAL WORKSHOP, 2017, : 208 - 213
  • [3] ASTA-Net: Adaptive Spatio-Temporal Attention Network for Person Re-Identification in Videos
    Zhu, Xierong
    Liu, Jiawei
    Wu, Haoze
    Wang, Meng
    Zha, Zheng-Jun
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1706 - 1715
  • [4] A spatio-temporal covariance descriptor for person re-identification
    Hadjkacem, Bassem
    Ayedi, Walid
    Abid, Mohamed
    Snoussi, Hichem
    2015 15TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS (ISDA), 2015, : 618 - 622
  • [5] Fusing Appearance and Spatio-Temporal Models for Person Re-Identification and Tracking
    Chen, Andrew Tzer-Yeu
    Biglari-Abhari, Morteza
    Wang, Kevin I-Kai
    JOURNAL OF IMAGING, 2020, 6 (05)
  • [6] Attribute saliency network for person re-identification
    Tay, Chiat-Pin
    Yap, Kim-Hui
    IMAGE AND VISION COMPUTING, 2021, 115
  • [7] Spatio-Temporal Representation Factorization for Video-based Person Re-Identification
    Aich, Abhishek
    Zheng, Meng
    Karanam, Srikrishna
    Chen, Terrence
    Roy-Chowdhury, Amit K.
    Wu, Ziyan
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 152 - 162
  • [8] Person Re-identification Based on Deep Spatio-temporal Features and Transfer Learning
    Wang, Shengke
    Zhang, Cui
    Duan, Lianghua
    Wang, Lina
    Wu, Shan
    Chen, Long
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 1660 - 1665
  • [9] Progressive Unsupervised Person Re-Identification by Tracklet Association With Spatio-Temporal Regularization
    Xie, Qiaokang
    Zhou, Wengang
    Qi, Guo-Jun
    Tian, Qi
    Li, Houqiang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 597 - 610