Human appearance modeling for matching across video sequences

Cited by: 16
Authors
Yu, Yang
Harwood, David
Yoon, Kyongil
Davis, Larry S. [1 ]
Affiliations
[1] Univ Maryland, Inst Adv Comp Studies, College Pk, MD 20742 USA
[2] McDaniel Coll, Dept Math & Comp Sci, Westminster, MD 21157 USA
Keywords
visual surveillance; appearance modeling and matching; color path-length profile; Kullback-Leibler distance; key frame selection
DOI
10.1007/s00138-006-0061-z
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We present an appearance model for establishing correspondence between tracks of people that may be taken at different places, at different times, or across different cameras. The appearance model is constructed by kernel density estimation. To incorporate structural information and to achieve invariance to motion and pose, a path-length feature is used in addition to color features. To achieve illumination invariance, two types of illumination-insensitive color features are discussed: the brightness color feature and the RGB rank feature. The similarity between a test image and an appearance model is measured by the information gain, or Kullback-Leibler distance. To represent the information contained in a video sequence thoroughly with as little data as possible, a key-frame selection and matching scheme is proposed. Experimental results demonstrate the important role of the path-length feature in the appearance model and the effectiveness of the proposed model and matching method.
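The matching idea in the abstract can be sketched in a few lines: build a kernel density estimate over per-pixel (path-length, color) feature vectors for each track, then score a candidate track by a Monte-Carlo estimate of the Kullback-Leibler distance to the stored model. The feature values, bandwidth, and cluster means below are hypothetical toy data, not the paper's actual features or parameters; this is a minimal sketch of the KDE + KL machinery, assuming isotropic Gaussian kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_logpdf(points, samples, bw=0.05):
    """Log density of an isotropic Gaussian kernel density estimate.

    points:  (m, d) evaluation points
    samples: (n, d) data the KDE is built from
    """
    # Squared distances between every evaluation point and every sample.
    sq = ((points[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_kernels = -sq / (2.0 * bw ** 2)
    d = points.shape[1]
    log_norm = np.log(len(samples)) + d * np.log(bw * np.sqrt(2.0 * np.pi))
    return np.logaddexp.reduce(log_kernels, axis=1) - log_norm

def kl_divergence(p_samples, q_samples, bw=0.05):
    """Monte-Carlo estimate of KL(p || q) using samples drawn from p."""
    log_p = kde_logpdf(p_samples, p_samples, bw)
    log_q = kde_logpdf(p_samples, q_samples, bw)
    return float(np.mean(log_p - log_q))

# Toy (path-length, color, color) feature clouds for three tracks:
# two of the same person, one of a differently dressed person.
person_a1 = rng.normal([0.5, 0.4, 0.3], 0.05, size=(400, 3))
person_a2 = rng.normal([0.5, 0.4, 0.3], 0.05, size=(400, 3))
person_b  = rng.normal([0.5, 0.7, 0.2], 0.05, size=(400, 3))

same_kl = kl_divergence(person_a2, person_a1)
diff_kl = kl_divergence(person_b, person_a1)
# The matching track yields a much smaller divergence than the non-match.
print(same_kl < diff_kl)  # True
```

A low KL distance means the test track's feature distribution is well explained by the stored appearance model, so the comparison above is the core of a nearest-model matching rule.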
Pages: 139-149 (11 pages)