Bayesian Dumbbell Diffusion Model for RGBT Object Tracking With Enriched Priors

Cited by: 4
|
Authors
Fan, Shenghua [1 ]
He, Chu [2 ]
Wei, Chenxia [3 ]
Zheng, Yujin [1 ]
Chen, Xi [1 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan 430072, Peoples R China
[3] Shanghai Acad Spaceflight Technol, Shanghai 201100, Peoples R China
Keywords
Bayesian; dumbbell diffusion models; plug-and-play; RGBT tracking;
DOI
10.1109/LSP.2023.3295758
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
RGBT tracking can be accomplished by constructing Bayesian estimators that incorporate fusion prior distributions for the visible (RGB) and thermal (T) modalities. Such estimators enable the computation of a posterior distribution over the variables of interest to locate the target. Incorporating rich prior information can improve predictor performance; however, current RGBT trackers have access to only limited fusion prior data. To mitigate this issue, we propose a novel tracker, BD2Track, which employs a diffusion model. Firstly, this letter introduces a dumbbell diffusion model and employs convolutional networks together with the dumbbell model to derive fusion-feature prior information from frames at different indices within the same tracking video sequence. Secondly, we propose a plug-and-play channel-augmented joint learning strategy to derive the image prior distribution. This strategy not only homogeneously generates modality-relevant prior information but also increases the distance between positive and negative samples within each modality while reducing the distance between the modalities during fusion. Results demonstrate promising performance on the GTOT, RGBT234, LasHeR, and VTUAV-ST datasets, surpassing other state-of-the-art trackers.
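The channel-augmented joint learning strategy is only summarized in the abstract above. As a rough, hypothetical sketch (not the authors' implementation), the PyTorch snippet below illustrates the two distance objectives it describes: a margin term that pushes positive and negative samples apart within each modality, and an alignment term that pulls the RGB and thermal embeddings of the same target together for fusion. The function names, the margin value, and the loss weights alpha and beta are illustrative assumptions.

import torch
import torch.nn.functional as F

# Hypothetical sketch, not the authors' code: illustrates the two distance
# objectives described in the abstract for the channel-augmented joint
# learning strategy.

def intra_modality_margin_loss(pos, neg, margin=1.0):
    # Push positive (target) and negative (background) embeddings of one
    # modality at least `margin` apart.
    return F.relu(margin - F.pairwise_distance(pos, neg)).mean()

def cross_modality_alignment_loss(feat_rgb, feat_t):
    # Pull the RGB and thermal embeddings of the same target together.
    return F.pairwise_distance(feat_rgb, feat_t).mean()

def joint_learning_loss(rgb_pos, rgb_neg, t_pos, t_neg, alpha=1.0, beta=1.0):
    # Intra-modality separation for both modalities plus cross-modality
    # alignment of the positive (target) embeddings.
    l_intra = (intra_modality_margin_loss(rgb_pos, rgb_neg)
               + intra_modality_margin_loss(t_pos, t_neg))
    l_cross = cross_modality_alignment_loss(rgb_pos, t_pos)
    return alpha * l_intra + beta * l_cross

# Example usage with random 256-dimensional embeddings for a batch of 8 samples.
if __name__ == "__main__":
    B, D = 8, 256
    loss = joint_learning_loss(torch.randn(B, D), torch.randn(B, D),
                               torch.randn(B, D), torch.randn(B, D))
    print(loss.item())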
Pages: 873-877
Number of pages: 5
Related Papers
50 records in total
  • [1] Trans-RGBT: RGBT Object Tracking with Transformer
    Liu, Wanjun
    Liang, Linlin
    Qu, Haicheng
    Computer Engineering and Applications, 2024, 60 (11) : 84 - 94
  • [2] Visual and Language Collaborative Learning for RGBT Object Tracking
    Wang, Jiahao
    Liu, Fang
    Jiao, Licheng
    Gao, Yingjia
    Wang, Hao
    Li, Shuo
    Li, Lingling
    Chen, Puhua
    Liu, Xu
    IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (12) : 12770 - 12781
  • [3] Exploring the potential of Siamese network for RGBT object tracking
    Feng, Liangliang
    Song, Kechen
    Wang, Junyi
    Yan, Yunhui
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [4] Adaptive Fusion CNN Features for RGBT Object Tracking
    Wang, Yong
    Wei, Xian
    Tang, Xuan
    Shen, Hao
    Zhang, Huanlong
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 7831 - 7840
  • [5] Exploring fusion strategies for accurate RGBT visual object tracking
    Tang, Zhangyong
    Xu, Tianyang
    Li, Hui
    Wu, Xiao-Jun
    Zhu, XueFeng
    Kittler, Josef
    INFORMATION FUSION, 2023, 99
  • [6] A BAYESIAN HIERARCHICAL APPEARANCE MODEL FOR ROBUST OBJECT TRACKING
    Almomani, Raed
    Dong, Ming
    Zhu, Dongxiao
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2016,
  • [8] Bayesian Diffusion Tensor Estimation with Spatial Priors
    Gu, Xuan
    Siden, Per
    Wegmann, Bertil
    Eklund, Anders
    Villani, Mattias
    Knutsson, Hans
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS, 2017, 10424 : 372 - 383
  • [10] Bayesian priors in estimates of object location in virtual reality
    Sampaio, Cristina
    Jones, Maria
    Engelbertson, Alexander
    Williams, Michael
    PSYCHONOMIC BULLETIN & REVIEW, 2020, 27 (06) : 1309 - 1316