Train bogie part recognition with multi-object multi-template matching adaptive algorithm

Cited: 6
Authors
Sasikala, N. [1 ]
Kishore, P. V. V. [2 ]
Affiliations
[1] KL Univ, Dept Elect & Commun Engn, Vaddeswaram, Andhra Pradesh, India
[2] KL Univ, Dept Elect & Commun Engn, Image Speech & Signal Proc Res Grp, Vaddeswaram 522502, Andhra Pradesh, India
Keywords
Rolling stock automation; Multiple template matching; High speed video
DOI
10.1016/j.jksuci.2017.10.001
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Automating a train rolling stock monitoring system by recognizing bogie parts is the process of identifying defects in the undercarriage of a train moving at over 30 km/h. Recognizing the parts of a moving train with computer vision models based on color and texture, combined with deformable curve segmentation models, is challenging and computationally intensive. A multi-object multi-template model is proposed to solve this problem at lower computational cost. A multi-object multi-template library covering 26 objects across 40 bogie frames was created from statistical parameters of the bogie part at the center of the frame through a template extraction model. Fifty templates were designed from 250 frames of the first bogie's movement through the camera plane. The maximum normalized cross-correlation coefficient, calculated on each frame against the 26-by-40 template matrix, identifies the bogie parts in the frame in a single computation. High-speed recording of the train bogies at 240 fps establishes the experimental datasets: 2 trains with 20 coaches each, capturing 15,000 frames per train. The correct recognition accuracy is 91% with a false recognition rate of 15%. (C) 2017 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University.
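The matching step the abstract describes lends itself to a short illustration. Below is a minimal Python sketch of multi-template matching via OpenCV's normalized cross-correlation coefficient (cv2.TM_CCOEFF_NORMED); the names TEMPLATE_LIBRARY, MATCH_THRESHOLD, and recognize_parts are illustrative assumptions, not identifiers from the paper, and the explicit loop stands in for the paper's single computation over the full 26-by-40 template matrix.

import cv2
import numpy as np

# Hypothetical library: part name -> list of grayscale templates,
# standing in for the paper's 26-object x 40-frame template matrix.
TEMPLATE_LIBRARY: dict[str, list[np.ndarray]] = {}

MATCH_THRESHOLD = 0.7  # assumed acceptance threshold on the NCC peak

def recognize_parts(frame_gray: np.ndarray):
    """Return (part, score, top_left) for each part whose best template match clears the threshold."""
    detections = []
    for part, templates in TEMPLATE_LIBRARY.items():
        best_score, best_loc = -1.0, None
        for tmpl in templates:
            # Normalized cross-correlation coefficient map over the whole frame
            result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            if max_val > best_score:
                best_score, best_loc = max_val, max_loc
        if best_score >= MATCH_THRESHOLD:
            detections.append((part, best_score, best_loc))
    return detections

Taking the per-part maximum over all templates before thresholding mirrors the paper's use of the maximum correlation coefficient to pick the winning template for each bogie part in a frame.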
Pages: 608-617
Page count: 10
Related papers
50 records in total
  • [1] Multi-template Matching Algorithm Based on Adaptive Fusion
    Li, Bing
    Su, Juan
    Chen, Dan
    Wu, Wei
    [J]. IMAGE AND GRAPHICS (ICIG 2017), PT I, 2017, 10666 : 602 - 613
  • [2] Multi-template matching algorithm for cucumber recognition in natural environment
    Bao Guanjun
    Cai Shibo
    Qi Liyong
    Xun Yi
    Zhang Libin
    Yang Qinghua
    [J]. COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2016, 127 : 754 - 762
  • [3] Joint Template Matching Algorithm for Associated Multi-object Detection
    Xie, Jianbin
    Liu, Tong
    Chen, Zhangyong
    Zhuang, Zhaowen
    [J]. KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2012, 6 (01): : 395 - 405
  • [4] An Adaptive Dynamic Multi-Template Correlation Filter for Robust Object Tracking
    Hung, Kuo-Ching
    Lin, Sheng-Fuu
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (20):
  • [5] A multi-object tracking algorithm based on adaptive pattern matching and offset estimating
    Luo, Sanding
    Tan, Weige
    [J]. WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 657 - 657
  • [6] Multi-template matching: a versatile tool for object-localization in microscopy images
    Thomas, Laurent S. V.
    Gehrig, Jochen
    [J]. BMC BIOINFORMATICS, 2020, 21 (01)
  • [7] Algorithm of Locally Adaptive Region Growing Based on Multi-Template Matching Applied to Automated Detection of Hemorrhages
    Gao Wei-wei
    Shen Jian-xin
    Wang Yu-liang
    Liang Chun
    Zuo Jing
    [J]. SPECTROSCOPY AND SPECTRAL ANALYSIS, 2013, 33 (02) : 448 - 453
  • [8] Development of a multi-object tracking algorithm with untrained features of object matching
    Gorbachev, V. A.
    Kalugin, V. F.
    [J]. COMPUTER OPTICS, 2023, 47 (06) : 1002 - +
  • [9] Research on multi-object recognition algorithm based on video
    Hou, Yong
    Mao, Runhua
    Yu, Yan
    Ouyang, Yuxing
    Bian, Ce
    Song, Binhu
    Wei, Baochang
    Qin, Yiqiao
    Chang, Shengbiao
    Dai, Fengzhi
    Jiao, Hongwei
    [J]. ICAROB 2018: PROCEEDINGS OF THE 2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL LIFE AND ROBOTICS, 2018, : 655 - 658