Multi-object trajectory tracking

Cited by: 1
Authors
Mei Han
Wei Xu
Hai Tao
Yihong Gong
Affiliations
[1] NEC Laboratories America
[2] University of California
Source
Keywords
Object Detection; Tracking Algorithm; Trajectory Tracking; Convolutional Neural Network; Foreground Pixel
DOI
Not available
Abstract
The majority of existing tracking algorithms are based on the maximum a posteriori solution of a probabilistic framework using a Hidden Markov Model, where the distribution of the object state at the current time instance is estimated from current and previous observations. However, this approach is prone to errors caused by distractions such as occlusions, background clutter and confusion among multiple objects. In this paper, we propose a multiple object tracking algorithm that seeks the optimal state sequence maximizing the joint multi-object state-observation probability. We call this algorithm trajectory tracking since it estimates the state sequence, or "trajectory", instead of the current state. The algorithm is capable of tracking an unknown, time-varying number of objects. We also introduce a novel observation model composed of the original image, the foreground mask given by background subtraction, and the object detection map generated by an object detector. The image provides the object appearance information. The foreground mask enables the likelihood computation to consider the multi-object configuration in its entirety. The detection map consists of pixel-wise object detection scores, which drives the tracking algorithm to perform joint inference on both the number of objects and their configurations efficiently. The proposed algorithm has been implemented and tested extensively in a complete CCTV video surveillance system to monitor entries and detect tailgating and piggy-backing violations at access points for over six months. The system achieved 98.3% precision in event classification. The violation detection rate is 90.4% and the detection precision is 85.2%. The results clearly demonstrate the advantages of the proposed detection-based trajectory tracking framework.
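The core idea described in the abstract, estimating the whole state sequence that maximizes the joint state-observation probability rather than picking the best state frame by frame, can be sketched with a toy single-object Viterbi search. This is an illustrative simplification, not the paper's system: `trajectory_track`, the 1-D discretized state space, and the Gaussian motion prior are assumptions introduced here, and the actual algorithm jointly infers a varying number of objects under a richer observation model (image, foreground mask, detection map).

```python
import numpy as np

def trajectory_track(det_scores, sigma=1.0):
    """Toy Viterbi-style MAP state-sequence estimate for one object.

    det_scores: (T, S) array of per-frame detection scores over S
    discretized states (e.g. image columns); used as the emission term.
    A Gaussian motion prior with std `sigma` penalizes large jumps.
    Returns the state sequence maximizing the joint log-probability,
    instead of the per-frame maxima a filtering approach would pick.
    """
    T, S = det_scores.shape
    states = np.arange(S)
    # Log transition scores: Gaussian penalty on state change.
    log_trans = -0.5 * ((states[None, :] - states[:, None]) / sigma) ** 2
    log_emit = np.log(det_scores + 1e-9)

    delta = log_emit[0].copy()           # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (prev_state, cur_state)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], states] + log_emit[t]

    # Backtrace the optimal trajectory.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return path
```

With a momentary distractor spike in the detection scores (an occlusion or multi-object confusion), the per-frame maximum jumps to the distractor, while the joint MAP sequence stays on the true object, which mirrors the abstract's argument for trajectory-level inference.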
Pages: 221-232 (11 pages)
Published in: Machine Vision and Applications, 2007, 18 (3-4): 221-232