CONVOLUTIONAL TEMPORAL ATTENTION MODEL FOR VIDEO-BASED PERSON RE-IDENTIFICATION

Cited by: 4
Authors:
Rahman, Tanzila [1 ]
Rochan, Mrigank [2 ]
Wang, Yang [2 ]
Affiliations:
[1] Univ British Columbia, Vancouver, BC, Canada
[2] Univ Manitoba, Winnipeg, MB, Canada
Funding:
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords:
Attention network; FCN; temporal attention; re-identification; semantic segmentation;
DOI:
10.1109/ICME.2019.00193
Chinese Library Classification:
TP31 [Computer Software];
Discipline Code:
081202; 0835;
Abstract:
The goal of video-based person re-identification is to match two input videos, such that the distance between the two videos is small if they contain the same person. A common approach to person re-identification is to first extract image features for all frames in a video, then aggregate these frame-level features into a video-level feature. The video-level features of two videos can then be used to compute the distance between them. In this paper, we propose a temporal attention approach for aggregating frame-level features into a video-level feature vector for re-identification. Our method is motivated by the observation that not all frames in a video are equally informative. We propose a fully convolutional temporal attention model for generating the attention scores. Fully convolutional networks (FCNs) have been widely used in semantic segmentation to generate 2D output maps. In this paper, we formulate video-based person re-identification as a sequence labeling problem analogous to semantic segmentation. We establish a connection between the two tasks and modify the FCN to generate attention scores that represent the importance of each frame. Extensive experiments on three benchmark datasets (i.e., iLIDS-VID, PRID-2011, and SDU-VID) show that our proposed method outperforms other state-of-the-art approaches.
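To make the aggregation step concrete, the following is a minimal sketch (not the authors' implementation) of how an FCN-style temporal attention module could score and pool frame-level features. The layer sizes, kernel widths, and softmax normalization are our own assumptions for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionFCN(nn.Module):
    """Hypothetical sketch: 1D convolutions over the time axis play the
    role that 2D convolutions play in a segmentation FCN, producing one
    attention score per frame instead of one label per pixel."""

    def __init__(self, feat_dim=2048, hidden_dim=256):
        super().__init__()
        self.conv1 = nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(hidden_dim, 1, kernel_size=3, padding=1)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim) per-frame CNN features
        x = frame_feats.transpose(1, 2)        # (batch, feat_dim, T)
        x = F.relu(self.conv1(x))              # (batch, hidden_dim, T)
        scores = self.conv2(x).squeeze(1)      # (batch, T) raw frame scores
        attn = F.softmax(scores, dim=1)        # normalize over the frames
        # Attention-weighted sum of frame features -> video-level vector.
        video_feat = torch.bmm(attn.unsqueeze(1), frame_feats).squeeze(1)
        return video_feat, attn

# Usage: aggregate features of a 16-frame clip (shapes are assumptions).
feats = torch.randn(4, 16, 2048)               # (batch, frames, feat_dim)
model = TemporalAttentionFCN()
video_feat, attn = model(feats)
print(video_feat.shape, attn.shape)            # [4, 2048], [4, 16]
```

The design choice being illustrated is that, because the module is fully convolutional along time, it can score clips of any length without retraining, mirroring how an FCN handles images of arbitrary spatial size.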
Pages: 1102 - 1107
Number of pages: 6