Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking

Cited by: 84
Authors
Zhang, Pengyu [1 ,2 ,3 ]
Zhao, Jie [1 ,2 ]
Bo, Chunjuan [4 ]
Wang, Dong [1 ,2 ,3 ]
Lu, Huchuan [1 ,2 ,5 ]
Yang, Xiaoyun [6 ]
Affiliations
[1] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Ningbo Inst, Ningbo 315016, Peoples R China
[3] Xidian Univ, Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[4] Dalian Minzu Univ, Coll Informat & Commun Engn, Dalian 116600, Peoples R China
[5] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[6] Remark AI UK Ltd, London E14 9SH, England
Funding
National Natural Science Foundation of China
Keywords
Target tracking; Cameras; Radar tracking; Task analysis; Switches; Lighting; Kalman filters; Visual tracking; RGB-T tracking; multimodal fusion; OBJECT TRACKING; PARTICLE FILTER; FUSION;
DOI
10.1109/TIP.2021.3060862
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this study, we propose a novel RGB-T tracking framework by jointly modeling both appearance and motion cues. First, to obtain a robust appearance model, we develop a novel late fusion method to infer the fusion weight maps of both RGB and thermal (T) modalities. The fusion weights are determined by using offline-trained global and local multimodal fusion networks, and then adopted to linearly combine the response maps of RGB and T modalities. Second, when the appearance cue is unreliable, we comprehensively take motion cues, i.e., target and camera motions, into account to make the tracker robust. We further propose a tracker switcher to switch the appearance and motion trackers flexibly. Numerous results on three recent RGB-T tracking datasets show that the proposed tracker performs significantly better than other state-of-the-art algorithms.
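The late-fusion step described in the abstract can be sketched as a per-pixel convex combination of the two modalities' response maps. This is a minimal illustration, not the paper's implementation: the function name and the toy weight maps below are assumptions, whereas in the paper the fusion weight maps are predicted by offline-trained global and local multimodal fusion networks.

```python
import numpy as np

def fuse_response_maps(resp_rgb, resp_t, w_rgb, w_t):
    """Linearly combine the RGB and thermal (T) response maps using
    per-pixel fusion weight maps (all four arrays share one shape).

    Illustrative helper only; the paper obtains w_rgb and w_t from
    offline-trained global/local fusion networks.
    """
    # Normalize the weights so they sum to one at every location,
    # then take the convex combination of the two response maps.
    total = w_rgb + w_t + 1e-12
    return (w_rgb / total) * resp_rgb + (w_t / total) * resp_t

# Toy example: the fused response leans toward the modality with the
# larger weight. Here 0.75 * 0.8 + 0.25 * 0.2 = 0.65 at every pixel.
resp_rgb = np.full((3, 3), 0.8)
resp_t = np.full((3, 3), 0.2)
fused = fuse_response_maps(resp_rgb, resp_t,
                           np.full((3, 3), 0.75), np.full((3, 3), 0.25))
```

The target location would then be taken as the peak of `fused`; when that appearance response is unreliable, the paper falls back on its motion cues via the tracker switcher.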
Pages: 3335-3347
Page count: 13
Related papers (50 in total)
  • [1] Jointly modeling association and motion cues for robust infrared UAV tracking
    Xu, Boyue
    Hou, Ruichao
    Bei, Jia
    Ren, Tongwei
    Wu, Gangshan
    [J]. VISUAL COMPUTER, 2024
  • [2] Region Selective Fusion Network for Robust RGB-T Tracking
    Yu, Zhencheng
    Fan, Huijie
    Wang, Qiang
    Li, Ziwan
    Tang, Yandong
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1357 - 1361
  • [3] Online Learning Samples and Adaptive Recovery for Robust RGB-T Tracking
    Liu, Jun
    Luo, Zhongqiang
    Xiong, Xingzhong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (02) : 724 - 737
  • [4] Robust RGB-T Tracking via Consistency Regulated Scene Perception
    Kang, Bin
    Liu, Liwei
    Zhao, Shihao
    Du, Songlin
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 510 - 514
  • [5] Channel Exchanging for RGB-T Tracking
    Zhao, Long
    Zhu, Meng
    Ren, Honge
    Xue, Lingjixuan
    [J]. SENSORS, 2021, 21 (17)
  • [6] RGB-T object tracking: Benchmark and baseline
    Li, Chenglong
    Liang, Xinyan
    Lu, Yijuan
    Zhao, Nan
    Tang, Jin
    [J]. PATTERN RECOGNITION, 2019, 96
  • [7] Dynamic Tracking Aggregation with Transformers for RGB-T Tracking
    Liu, Xiaohu
    Lei, Zhiyong
    [J]. JOURNAL OF INFORMATION PROCESSING SYSTEMS, 2023, 19 (01): : 80 - 88
  • [8] RGB-T tracking with frequency hybrid awareness
    Lei, Lei
    Li, Xianxian
    [J]. IMAGE AND VISION COMPUTING, 2024, 152
  • [9] Toward Modalities Correlation for RGB-T Tracking
    Hu, Xiantao
    Zhong, Bineng
    Liang, Qihua
    Zhang, Shengping
    Li, Ning
    Li, Xianxian
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10) : 9102 - 9111
  • [10] An RGB-T Object Tracking Method for Solving Camera Motion Based on Correlation Filter
    Zhao, Zhongxuan
    Li, Weixing
    Pan, Feng
    [J]. 2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 3526 - 3531