Spatio-temporal interactive fusion based visual object tracking method

Cited: 0
Authors
Huang, Dandan [1 ]
Yu, Siyu [1 ]
Duan, Jin [1 ]
Wang, Yingzhi [1 ]
Yao, Anni [1 ]
Wang, Yiwen [1 ]
Xi, Junhan [1 ]
Affiliations
[1] Changchun Univ Sci & Technol, Coll Elect Informat Engn, Changchun, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
object tracking; spatio-temporal context; feature enhancement; feature fusion; attention mechanism;
DOI
10.3389/fphy.2023.1269638
CLC Number
O4 [Physics];
Subject Classification Code
0702;
Abstract
Visual object tracking methods often fail to exploit inter-frame correlation and struggle with challenges such as local occlusion, deformation, and background interference. To address these issues, this paper proposes a spatio-temporal interactive fusion (STIF) based visual object tracking method that fully exploits spatio-temporal background information to strengthen feature representation, improve tracking accuracy, adapt to object changes, and reduce model drift. The method incorporates feature-enhancement networks along both the temporal and spatial dimensions, leveraging spatio-temporal background information to extract salient features that aid object recognition. A spatio-temporal interactive fusion network then learns a similarity metric between the memory frame and the query frame over these enhanced features, distilling the stronger feature representations through interactive fusion of information. The proposed method is evaluated on four challenging public datasets, where it achieves state-of-the-art (SOTA) performance and significantly improves tracking accuracy in complex scenarios involving local occlusion, deformation, and background interference, including a success rate of 78.8% on the large-scale TrackingNet dataset.
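Illustrative Code Sketch
The abstract describes the memory/query-frame fusion only at a high level. The minimal PyTorch sketch below illustrates one plausible reading: bidirectional cross-attention in which the query frame attends to the memory frame and vice versa, with residual connections preserving each frame's enhanced features. All names (MemoryQueryFusion, mem_feat, query_feat) and dimensions are illustrative assumptions, not the authors' actual STIF implementation.

import torch
import torch.nn as nn

class MemoryQueryFusion(nn.Module):
    """Hypothetical memory/query cross-attention fusion (not the paper's code)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Query-frame tokens attend to memory-frame tokens (temporal context) ...
        self.mem_to_query = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ... and memory-frame tokens attend back to the query frame.
        self.query_to_mem = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)

    def forward(self, mem_feat: torch.Tensor, query_feat: torch.Tensor):
        # mem_feat:   (B, N_mem, C) tokens from the memory (template) frame
        # query_feat: (B, N_qry, C) tokens from the current search frame
        q_enh, _ = self.mem_to_query(query_feat, mem_feat, mem_feat)
        m_enh, _ = self.query_to_mem(mem_feat, query_feat, query_feat)
        # Residual connections keep the original per-frame features intact.
        return self.norm_m(mem_feat + m_enh), self.norm_q(query_feat + q_enh)

# Example: fuse 16x16 = 256 feature tokens from each of the two frames.
fusion = MemoryQueryFusion(dim=256)
mem = torch.randn(1, 256, 256)    # (batch, tokens, channels)
qry = torch.randn(1, 256, 256)
mem_out, qry_out = fusion(mem, qry)

In the paper's terms, the fused query representation would then feed the similarity metric between memory and query frames; the sketch stops at the interactively fused features.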
Pages: 14
Related Papers
50 records
  • [1] UAV Visual Object Tracking Based on Spatio-Temporal Context
    He, Yongxiang
    Chao, Chuang
    Zhang, Zhao
    Guo, Hongwu
    Ma, Jianjun
    DRONES, 2024, 8 (12)
  • [2] ViT Spatio-Temporal Feature Fusion for Aerial Object Tracking
    Guo, Chuangye
    Liu, Kang
    Deng, Donghu
    Li, Xuelong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 6749 - 6761
  • [3] Unified spatio-temporal attention mixformer for visual object tracking
    Park, Minho
    Yoon, Gang-Joon
    Song, Jinjoo
    Yoon, Sang Min
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 134
  • [4] Memory Prompt for Spatio-Temporal Transformer Visual Object Tracking
    Xu T.
    Wu X.
    Zhu X.
    Kittler J.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (08): 1 - 6
  • [5] Exploring reliable infrared object tracking with spatio-temporal fusion transformer
    Qi, Meibin
    Wang, Qinxin
    Zhuang, Shuo
    Zhang, Ke
    Li, Kunyuan
    Liu, Yimin
    Yang, Yanfang
    KNOWLEDGE-BASED SYSTEMS, 2024, 284
  • [6] Learning spatio-temporal discriminative model for affine subspace based visual object tracking
    Xu, Tianyang
    Zhu, Xue-Feng
    Wu, Xiao-Jun
    VISUAL INTELLIGENCE, 1 (1)
  • [7] Spatio-temporal feature fusion based correlative binary relevance for visual object detection
    Amaresh, M.
    Chitrakala, S.
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (05)
  • [8] Aberrance suppressed spatio-temporal correlation filters for visual object tracking
    Elayaperumal, Dinesh
    Joo, Young Hoon
    PATTERN RECOGNITION, 2021, 115
  • [9] Foreground Object Detection in Visual Surveillance With Spatio-Temporal Fusion Network
    Kim, Jae-Yeul
    Ha, Jong-Eun
    IEEE ACCESS, 2022, 10 : 122857 - 122869
  • [10] DASTSiam: Spatio-temporal fusion and discriminative enhancement for Siamese visual tracking
    Huang, Yucheng
    Firkat, Eksan
    Zhang, Jinlai
    Zhu, Lijuan
    Zhu, Bin
    Zhu, Jihong
    Hamdulla, Askar
    IET COMPUTER VISION, 2023, 17 (08) : 1017 - 1033