TATrack: Target-aware transformer for object tracking

Cited by: 2
Authors
Huang, Kai [1 ]
Chu, Jun [1 ]
Leng, Lu [1 ]
Dong, Xingbo [2 ]
Affiliations
[1] Nanchang Hangkong Univ, Key Lab Jiangxi Prov Image Proc & Pattern Recognit, Nanchang 330063, Peoples R China
[2] Anhui Univ, Sch Artificial Intelligence, Hefei 230093, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object tracking; Siamese visual tracker; Self-attention; Deformable attention; Target-aware tracking;
DOI
10.1016/j.engappai.2023.107304
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
Vision transformers have recently been adapted for object tracking and achieve promising performance owing to global correlation modeling via self-attention. However, self-attention in existing trackers attends equally to foreground and background, limiting discriminative ability because the attention is not target-aware. Existing solutions suffer from information loss and the introduction of additional noise. This study proposes TATrack, a Transformer-based Siamese tracking architecture integrated with deformable attention. TATrack focuses on the most relevant target information in the search region by adaptively selecting the positions of key-value pairs, thereby reducing information loss and additional noise. Experiments demonstrate that TATrack outperforms state-of-the-art models by a significant margin on GOT-10k, TrackingNet, LaSOT, and OTB100 at comparable processing speeds. The source code and pretrained models are available at www.github.com/Kevoen/TATrack.
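The core idea in the abstract, that the query adaptively selects which key-value positions to attend over instead of attending to every location, can be sketched as a single-head deformable-attention step. This is a minimal illustration of the mechanism, not the authors' implementation: the weight matrices, nearest-neighbour sampling, and all names below are illustrative assumptions (the paper-style version would use learned layers and bilinear interpolation).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def deformable_attention(feat, query, ref, W_off, W_k, W_v, n_points=4):
    """Single-head deformable-attention sketch.

    feat:  (H, W, C) search-region features
    query: (C,) target-conditioned query vector
    ref:   (row, col) reference point in the search region
    The query predicts n_points sampling offsets; keys and values are
    gathered only at those positions, so attention concentrates on a
    few target-relevant locations instead of the whole feature map.
    """
    H, Wd, C = feat.shape
    # Predict sampling offsets from the query (illustrative linear map).
    offsets = (query @ W_off).reshape(n_points, 2)
    # Sample features at the offset positions (nearest-neighbour for
    # brevity; bilinear interpolation would be used in practice).
    rows = np.clip(np.round(ref[0] + offsets[:, 0]).astype(int), 0, H - 1)
    cols = np.clip(np.round(ref[1] + offsets[:, 1]).astype(int), 0, Wd - 1)
    sampled = feat[rows, cols]                   # (n_points, C)
    keys = sampled @ W_k                         # (n_points, C)
    values = sampled @ W_v                       # (n_points, C)
    # Attend only over the sampled positions.
    attn = softmax(keys @ query / np.sqrt(C))    # (n_points,)
    return attn @ values                         # (C,)

rng = np.random.default_rng(0)
C, n_points = 8, 4
feat = rng.standard_normal((16, 16, C))
query = rng.standard_normal(C)
W_off = rng.standard_normal((C, n_points * 2))
W_k = rng.standard_normal((C, C))
W_v = rng.standard_normal((C, C))
out = deformable_attention(feat, query, (8, 8), W_off, W_k, W_v, n_points)
print(out.shape)
```

Because only `n_points` positions contribute keys and values, background locations never enter the softmax, which is what makes the attention target-aware rather than uniformly global.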
Pages: 13
Related Papers
50 records
  • [1] Target-Aware Transformer for Satellite Video Object Tracking
    Lai, Pujian
    Zhang, Meili
    Cheng, Gong
    Li, Shengyang
    Huang, Xiankai
    Han, Junwei
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 10
  • [2] Target-Aware Transformer Tracking
    Zheng, Yuhui
    Zhang, Yan
    Xiao, Bin
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4542 - 4551
  • [3] Know Who You Are: Learning Target-Aware Transformer for Object Tracking
    Zou, Zhuojun
    Liu, Xuexin
    Zhang, Yuanpei
    Shu, Lin
    Hao, Jie
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1427 - 1432
  • [4] Target-aware transformer tracking with hard occlusion instance generation
    Xiao, Dingkun
    Wei, Zhenzhong
    Zhang, Guangjun
    [J]. FRONTIERS IN NEUROROBOTICS, 2024, 17
  • [5] TabCtNet: Target-aware bilateral CNN-transformer network for single object tracking in satellite videos
    Zhu, Qiqi
    Huang, Xin
    Guan, Qingfeng
    [J]. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 128
  • [6] Target-Aware Deep Tracking
    Li, Xin
    Ma, Chao
    Wu, Baoyuan
    He, Zhenyu
    Yang, Ming-Hsuan
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 1369 - 1378
  • [7] Knowledge Distillation via the Target-aware Transformer
    Lin, Sihao
    Xie, Hongwei
    Wang, Bing
    Yu, Kaicheng
    Chang, Xiaojun
    Liang, Xiaodan
    Wang, Gang
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 10905 - 10914
  • [8] Focal DETR: Target-Aware Token Design for Transformer-Based Object Detection
    Xie, Tianming
    Zhang, Zhonghao
    Tian, Jing
    Ma, Lihong
    [J]. SENSORS, 2022, 22 (22)
  • [9] Target-Aware State Estimation for Visual Tracking
    Zhou, Zikun
    Li, Xin
    Fan, Nana
    Wang, Hongpeng
    He, Zhenyu
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (05) : 2908 - 2920
  • [10] Target-Aware Correlation Filter Tracking in RGBD Videos
    Kuai, Yangliu
    Wen, Gongjian
    Li, Dongdong
    Xiao, Jingjing
    [J]. IEEE SENSORS JOURNAL, 2019, 19 (20) : 9522 - 9531