Centralized Information Fusion with Limited Multi-View for Multi-Object Tracking

Cited by: 0
Authors
Liu, Minti [1 ]
Zeng, Cao [1 ]
Zhao, Shihua [1 ]
Li, Shidong [2 ]
Institutions
[1] Xidian Univ, Natl Lab Radar Signal Proc, Xian 710071, Shaanxi, Peoples R China
[2] San Francisco State Univ, Dept Math, San Francisco, CA 94132 USA
Keywords
multi-object tracking; centralized information fusion; finite set statistics; labeled multi-Bernoulli filter; limited multi-view; radar network; Gibbs sampling; BERNOULLI; IMPLEMENTATION; DERIVATION; FILTERS;
DOI
10.1117/12.2626840
CLC classification number
TP [Automation and Computer Technology];
Discipline classification code
0812
Abstract
In practical radar detection applications, the beam width of the antenna pattern limits each sensor's field of view (FOV), so a single sensor lacks overall perception of the area of interest (AOI). In particular, when an unknown and time-varying number of targets appears in the AOI, key objects can easily be missed or tracked incorrectly. To address these problems, a radar network is adopted to fuse the observations from limited multi-view sensors into global field-of-view information, and the trajectories of multiple objects are then estimated at the fusion center. Within the FInite Set STatistics (FISST) framework, the birth and death processes of multiple targets within the FOVs are modeled as multi-Bernoulli processes, and the posterior density of the multi-object state is propagated recursively in time according to the Bayesian criterion. Simulation results for multi-object trajectory estimation with four kinds of multi-Bernoulli (MB) filters are given under three scenarios, illustrating that both the number of objects of interest and the accuracy of trajectory estimation improve as the number of local observation fields of view increases. Furthermore, the tracking performance of the labeled multi-Bernoulli (LMB) filter is superior to that of the unlabeled filters.
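The Bayesian birth/death recursion that the abstract describes can be illustrated, for a single Bernoulli component, by the evolution of its existence probability. The following is a minimal sketch under simplified assumptions (one received measurement, scalar likelihood, illustrative parameter values not taken from the paper), not the paper's actual multi-sensor LMB implementation:

```python
# Sketch of the existence-probability recursion of one Bernoulli component,
# the building block of multi-Bernoulli (MB) filters. Parameter values
# (birth, survival, detection, clutter) are illustrative assumptions.

def predict_existence(r, p_birth=0.05, p_survive=0.95):
    """Predicted existence probability after the birth/death transition."""
    return p_birth * (1.0 - r) + p_survive * r

def update_existence(r_pred, likelihood, p_detect=0.9, clutter_intensity=0.1):
    """Bayes update of existence probability given one received measurement.

    Three hypotheses explain the measurement:
      - object absent, measurement is clutter
      - object present but missed, measurement is clutter
      - object present and detected, measurement is target-originated
    """
    w_absent = (1.0 - r_pred) * clutter_intensity
    w_missed = r_pred * (1.0 - p_detect) * clutter_intensity
    w_detect = r_pred * p_detect * likelihood
    return (w_missed + w_detect) / (w_absent + w_missed + w_detect)

r = 0.3                                   # prior existence probability
r_pred = predict_existence(r)             # birth/death prediction step
r_post = update_existence(r_pred, 2.0)    # a likely detection raises r
```

In an MB filter a bank of such components is propagated jointly; the LMB variant additionally attaches a distinct label to each component, which is what enables the trajectory-level estimates compared in the paper.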
Pages: 7