Discriminative Feature Learning With Co-Occurrence Attention Network for Vehicle ReID

Cited by: 19
Authors
Sheng, Hao [1 ,2 ,3 ]
Wang, Shuai [2 ,4 ]
Chen, Haobo [5 ]
Yang, Da [2 ,4 ]
Huang, Yang [5 ]
Shen, Jiahao
Ke, Wei [3 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Beihang Univ, Zhongfa Aviat Inst, Hangzhou 311115, Peoples R China
[3] Macao Polytech Univ, Fac Appl Sci, Macau 999078, Peoples R China
[4] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
[5] ByteDance, Beijing 100089, Peoples R China
Keywords
Feature extraction; Task analysis; Videos; Representation learning; Pipelines; Image color analysis; Visualization; Vehicle re-identification; co-occurrence attention; discriminative learning; image representation;
DOI
10.1109/TCSVT.2023.3326375
Chinese Library Classification (CLC)
TM [Electrical Engineering & Technology]; TN [Electronics & Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Vehicle Re-Identification (ReID) aims to find images of the same vehicle across different videos. It remains a challenging task in the video analysis field due to the large appearance discrepancy of the same vehicle in cross-view matching and the subtle differences between similar vehicles in same-view matching. In this paper, we propose a Co-occurrence Attention Net (CAN) to address these two challenges. Specifically, CAN consists of two branches, a main branch and an aware branch. The main branch is in charge of extracting global features that are consistent across most views. This feature encodes holistic information such as color and pose; however, it cannot handle cross-view or same-view hard cases, as shown in Fig. 1. The aware branch is therefore designed to focus on local details and viewpoint information, which serve as an important complement for those hard cases. Considering that the positions of local areas such as wheels and logos change with the viewpoint, an Aware Attention Module is introduced to uncover the hidden relationships among local areas while seamlessly incorporating the viewpoint information. CAN is then trained with a partition-and-reunion-based loss, which narrows the intra-class distance and enlarges the inter-class distance. Further, an adaptive co-occurrence view emphasis strategy is adopted to fully exploit the learned features. Experimental results on three widely used datasets, VeRi-776, VehicleID, and VERI-Wild, demonstrate the effectiveness of our method and its competitive performance against other state-of-the-art methods.
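A minimal sketch of the two-branch design described in the abstract, assuming a PyTorch implementation; all module and parameter names (CANSketch, AwareAttention, feat_dim, num_parts, num_views) are illustrative assumptions, not the authors' released code. The main branch pools a global feature from a backbone, the aware branch applies attention over pooled local part features together with a viewpoint token, and the two features are concatenated for identification.

# Illustrative sketch only: names and shapes are assumptions for exposition,
# not the paper's actual architecture or released code.
import torch
import torch.nn as nn


class AwareAttention(nn.Module):
    """Self-attention over local part features, fused with a viewpoint embedding."""

    def __init__(self, feat_dim: int, num_views: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.view_embed = nn.Embedding(num_views, feat_dim)

    def forward(self, parts: torch.Tensor, view_id: torch.Tensor) -> torch.Tensor:
        # parts: (B, P, D) local part features; view_id: (B,) discrete viewpoint label
        v = self.view_embed(view_id).unsqueeze(1)      # (B, 1, D) viewpoint token
        tokens = torch.cat([v, parts], dim=1)          # prepend viewpoint to part tokens
        out, _ = self.attn(tokens, tokens, tokens)     # model co-occurrence among parts
        return out.mean(dim=1)                         # pooled aware feature (B, D)


class CANSketch(nn.Module):
    """Two-branch model: a global (main) branch plus a part-aware attention branch."""

    def __init__(self, feat_dim: int = 256, num_parts: int = 4, num_ids: int = 576):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for a CNN backbone
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))  # horizontal stripes as "parts"
        self.aware_attn = AwareAttention(feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_ids)     # ID logits from fused feature

    def forward(self, x: torch.Tensor, view_id: torch.Tensor):
        fmap = self.backbone(x)                                   # (B, D, H, W)
        g = self.global_pool(fmap).flatten(1)                     # main-branch global feature
        parts = self.part_pool(fmap).squeeze(-1).transpose(1, 2)  # (B, P, D) local features
        a = self.aware_attn(parts, view_id)                       # aware-branch feature
        fused = torch.cat([g, a], dim=1)
        return fused, self.classifier(fused)


if __name__ == "__main__":
    model = CANSketch()
    imgs = torch.randn(2, 3, 128, 128)
    views = torch.tensor([0, 3])
    feat, logits = model(imgs, views)
    print(feat.shape, logits.shape)  # torch.Size([2, 512]) torch.Size([2, 576])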
Pages: 3510-3522
Page count: 13
Related Papers (50 in total)
  • [1] Discriminative feature co-occurrence selection for object detection
    Mita, Takeshi
    Kaneko, Toshimitsu
    Stenger, Bjorn
    Hori, Osamu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2008, 30 (07) : 1257 - 1269
  • [2] Learning a Discriminative Feature Attention Network for pancreas CT segmentation
    Huang, Meixiang
    Wang, Yuanjin
    Huang, Chongfei
    Yuan, Jing
    Kong, Dexing
    APPLIED MATHEMATICS-A JOURNAL OF CHINESE UNIVERSITIES SERIES B, 2022, 37 (01) : 73 - 90
  • [3] Discriminative Regional Color Co-Occurrence Descriptor
    Zou, Qin
    Qi, Xianbiao
    Li, Qingquan
    Wang, Song
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 696 - 700
  • [4] Deep Co-occurrence Feature Learning for Visual Object Recognition
    Shih, Ya-Fang
    Yeh, Yang-Ming
    Lin, Yen-Yu
    Weng, Ming-Fang
    Lu, Yi-Chang
    Chuang, Yung-Yu
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 7302 - 7311
  • [5] An attentional spatial temporal graph convolutional network with co-occurrence feature learning for action recognition
    Tian, Dong
    Lu, Zhe-Ming
    Chen, Xiao
    Ma, Long-Hua
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (17-18) : 12679 - 12697
  • [6] Discriminative Co-Occurrence of Concept Features for Action Recognition
    Zhou, Tongchi
    Xu, Qinjun
    Hamdulla, Askar
    2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND PATTERN RECOGNITION (AIPR 2018), 2018, : 6 - 10
  • [7] Contrast enhancement based on discriminative co-occurrence statistics
    Wu, X.
    Sun, Y.
    Kawanishi, T.
    Kashino, K.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (04) : 6413 - 6442