SiamPAT: Siamese point attention networks for robust visual tracking

Cited by: 0
Authors
Chen, Hang [1]
Zhang, Weiguo [1]
Yan, Danghui [1]
Affiliation
[1] Northwestern Polytech Univ, Automat Coll, Xian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
visual tracking; attention mechanism; Siamese point attention; object attention; OBJECT TRACKING;
DOI
10.1117/1.JEI.30.5.053001
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline classification
0808; 0809
Abstract
The attention mechanism originates from the study of human visual behavior; in recent years it has been widely used across artificial intelligence and has become an important component of neural network architectures. Many attention-based trackers have achieved improved accuracy and robustness. However, these trackers cannot accurately suppress the influence of background information and distractors, nor do they enhance the target object information, which limits their performance. We propose Siamese point attention (SPA) networks for robust visual tracking. SPA networks learn position attention and channel attention jointly from the information of the two branches. To construct point attention, each point on the template feature is used to compute similarity with the search feature. Because this similarity is based on the local information of the target object, it reduces the influence of background, deformation, and rotation. The region of interest is obtained by deriving position attention from point attention, and position attention is then integrated into the computation of channel attention to reduce the influence of irrelevant areas. In addition, we propose an object attention module and integrate it into the classification and regression module to further enhance the semantic information of the target object and improve tracking accuracy. Extensive experiments on five benchmark datasets show that our method achieves state-of-the-art performance. (C) 2021 SPIE and IS&T
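For readers who want a concrete picture of the mechanism sketched in the abstract, the short Python/PyTorch snippet below illustrates one plausible reading of point, position, and channel attention between the template and search branches. It is a minimal illustrative sketch, not the authors' code: the function names (point_attention, position_attention, channel_attention), the tensor shapes, the scaled dot-product similarity, the mean aggregation of per-point maps, and the sigmoid channel gating are all assumptions made for illustration.

    # Illustrative sketch only (not the paper's implementation): every point of the
    # template feature attends over the search feature; the per-point maps are then
    # aggregated into a spatial (position) attention map, which re-weights the search
    # feature before channel attention is computed.
    import torch
    import torch.nn.functional as F

    def point_attention(template_feat, search_feat):
        # template_feat: (B, C, h, w); search_feat: (B, C, H, W)
        # Returns (B, h*w, H*W): one similarity map over the search region
        # for every spatial point of the template.
        B, C, h, w = template_feat.shape
        t = template_feat.flatten(2).transpose(1, 2)   # (B, h*w, C)
        s = search_feat.flatten(2)                     # (B, C, H*W)
        sim = torch.bmm(t, s) / C ** 0.5               # scaled dot-product similarity (assumed)
        return F.softmax(sim, dim=-1)                  # normalize over search positions

    def position_attention(point_att, H, W):
        # Aggregate the per-point maps (here by a simple mean, an assumption)
        # into a single spatial attention map over the search feature.
        B = point_att.shape[0]
        return point_att.mean(dim=1).view(B, 1, H, W)  # (B, 1, H, W)

    def channel_attention(search_feat, pos_att):
        # Channel attention computed on the position-weighted search feature,
        # so that irrelevant spatial areas contribute less (a plausible reading
        # of the abstract, not the paper's exact formulation).
        weighted = search_feat * pos_att               # (B, C, H, W)
        w = torch.sigmoid(weighted.mean(dim=(2, 3)))   # (B, C) channel weights
        return search_feat * w[:, :, None, None]

For example, with a template feature of shape (1, 256, 7, 7) and a search feature of shape (1, 256, 31, 31), point_attention returns a (1, 49, 961) tensor, and position_attention reduces it to a (1, 1, 31, 31) map that re-weights the search feature before the channel weights are computed.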
Pages: 17
Related papers
50 records in total
  • [1] Siamese Graph Attention Networks for robust visual object tracking
    Lu, Junjie
    Li, Shengyang
    Guo, Weilong
    Zhao, Manqi
    Yang, Jian
    Liu, Yunfei
    Zhou, Zhuang
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 229
  • [2] Complementary Siamese networks for robust visual tracking
    Fan, Heng
    Xu, Lu
    Xiang, Jinhai
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2247 - 2251
  • [3] Deformable Siamese Attention Networks for Visual Object Tracking
    Yu, Yuechen
    Xiong, Yilei
    Huang, Weilin
    Scott, Matthew R.
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 6727 - 6736
  • [4] Siamese Attention Networks with Adaptive Templates for Visual Tracking
    Zhang, Bo
    Liang, Zhixue
    Dong, Wenyong
    MOBILE INFORMATION SYSTEMS, 2022, 2022
  • [5] Triple attention and global reasoning Siamese networks for visual tracking
    Shu, Ping
    Xu, Keying
    Bao, Hua
    MACHINE VISION AND APPLICATIONS, 2022, 33 (04)
  • [6] SGAT: Shuffle and graph attention based Siamese networks for visual tracking
    Wang, Jun
    Zhang, Limin
    Zhang, Wenshuang
    Wang, Yuanyun
    Deng, Chengzhi
    PLOS ONE, 2022, 17 (11)
  • [7] SiamAtt: Siamese attention network for visual tracking
    Yang, Kai
    He, Zhenyu
    Zhou, Zikun
    Fan, Nana
    KNOWLEDGE-BASED SYSTEMS, 2020, 203
  • [8] Evolution of Siamese Visual Tracking with Slot Attention
    Wang, Jian
    Ye, Xiangzhou
    Wu, Dongjie
    Gong, Jinfu
    Tang, Xinyi
    Li, Zheng
    ELECTRONICS, 2024, 13 (03)
  • [9] Keypoint prediction enhanced Siamese networks with attention for accurate visual object tracking
    Sakthi, K. S. Sachin
    Joo, Young Hoon
    Jeong, Jae Hoon
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268