A new unified method for detecting text from marathon runners and sports players in video

Cited: 15
Authors
Nag, Sauradip [1 ]
Shivakumara, Palaiahnakote [2 ]
Pal, Umapada [3 ]
Lu, Tong [4 ]
Blumenstein, Michael [5 ]
Affiliations
[1] Kalyani Govt Engn Coll, Kolkata, India
[2] Univ Malaya, Fac Comp Sci & Informat Technol, Kuala Lumpur, Malaysia
[3] Indian Stat Inst, Comp Vis & Pattern Recognit Unit, Kolkata, India
[4] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
[5] Univ Technol Sydney UTS, Fac Engn & Informat Technol, Sydney, NSW, Australia
Funding
National Natural Science Foundation of China;
Keywords
Video text analysis; Gradient direction; Bayesian classifier; Face detection; Torso detection; Deep learning; Text detection; RECOGNITION; TRACKING; FRAMEWORK;
DOI
10.1016/j.patcog.2020.107476
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Detecting text on the torsos of marathon runners and sports players in video is challenging because of poor image quality and the adverse effects of flexible, colorful clothing and the varied structures and actions of human bodies. This paper presents a new unified method for tackling these challenges. The proposed method fuses gradient magnitude and direction coherence of text pixels in a new way to detect candidate regions. The candidate regions are used to determine the number of temporal frame clusters obtained by K-means clustering on frame differences, which in turn detects key frames. The proposed method explores Bayesian probability for skin portions using color values at both pixel and component levels of temporal frames, which provides fused images containing skin components. Based on the skin information, the method then detects faces and torsos by finding structural and spatial coherence between them. We further propose an adaptive pixel-linking deep learning model for text detection in the torso regions. The proposed method is tested on our own dataset collected from marathon/sports video and on three standard marathon-image datasets, namely RBNR, MMM and R-ID, to evaluate its performance. In addition, it is tested on the standard natural scene text datasets CTW1500 and MS-COCO Text to show its generality. A comparative study with state-of-the-art methods on bib number/text detection across the different datasets shows that the proposed method outperforms the existing methods. (C) 2020 Elsevier Ltd. All rights reserved.
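As an illustration of the pixel-level Bayesian skin-probability step summarized in the abstract, the following is a minimal Python sketch under assumed inputs: it builds skin and non-skin color histograms from labelled pixels and returns a per-pixel posterior skin probability for an image. The function names, bin count, and histogram-based likelihood model are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: pixel-level Bayesian skin probability from color
# histograms (an assumption for this record, not the paper's exact model).
import numpy as np

def build_histograms(pixels, labels, bins=32):
    """Estimate class-conditional color likelihoods from labelled pixels.

    pixels : (N, 3) uint8 RGB values
    labels : (N,) bool array, True where the pixel is skin
    """
    edges = np.linspace(0, 256, bins + 1)
    skin_hist, _ = np.histogramdd(pixels[labels], bins=(edges, edges, edges))
    bg_hist, _ = np.histogramdd(pixels[~labels], bins=(edges, edges, edges))
    # Normalize counts to P(color | skin) and P(color | not skin)
    skin_hist /= max(skin_hist.sum(), 1)
    bg_hist /= max(bg_hist.sum(), 1)
    prior_skin = labels.mean()          # P(skin) from the training labels
    return skin_hist, bg_hist, prior_skin

def skin_probability(image, skin_hist, bg_hist, prior_skin, bins=32):
    """Posterior P(skin | color) for every pixel of an (H, W, 3) uint8 image."""
    idx = (image.astype(np.int32) * bins) // 256        # quantize each channel
    p_skin = skin_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    p_bg = bg_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    num = p_skin * prior_skin
    den = num + p_bg * (1.0 - prior_skin) + 1e-12        # avoid division by zero
    return num / den                                     # (H, W) probability map
```

Thresholding the returned probability map would give candidate skin components; in the paper these are further fused at the component level before face and torso detection.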
Pages: 17