Extracting Gait Figures in a Video based on Markerless Motion

Cited by: 4
Authors
Kusakunniran, Worapan [1 ]
Affiliation
[1] Mahidol Univ, Fac Informat & Commun Technol, Bangkok 10700, Thailand
Keywords
RECOGNITION; BIOMETRICS;
DOI
10.1109/KSE.2015.16
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a new method to extract gait figures from a 2D video without using any markers. Such a scenario is more feasible in a real-world environment than a traditional 3D cooperative multi-camera system with reflective markers, which is costly and complicated. The proposed method extracts the following information from a 2D gait video based on markerless motion: 1) a gait period; 2) key positions of the human body (i.e. head, waist, left knee, right knee, left ankle, and right ankle) in each frame within a gait period. This is processed using statistical techniques including linear regression, parabolic regression, and polynomial interpolation. The extracted gait information is useful for many gait-based applications, such as human identification in surveillance systems, injury analysis in sports science, and disease detection and gait rehabilitation in clinical settings. The widely adopted CASIA gait database B is used to verify the proposed method. The extracted key positions are validated by comparing them with a ground truth manually generated by human observers. The experimental results demonstrate that the proposed method achieves very promising performance.
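The abstract names two extraction steps (gait-period estimation and key-position smoothing via regression). The paper's exact procedure is not given here, so the sketch below is only an illustration of those two statistical building blocks under assumed inputs: a per-frame 1-D gait signal (e.g. silhouette width) for period estimation via normalized autocorrelation, and a noisy key-point trajectory smoothed by parabolic (degree-2) regression. The function names and the `min_lag` parameter are hypothetical.

```python
import numpy as np

def estimate_gait_period(signal, min_lag=5):
    """Estimate the dominant period (in frames) of a 1-D gait signal,
    e.g. per-frame silhouette width, via normalized autocorrelation.
    Illustrative sketch only; not necessarily the paper's procedure."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()
    n = len(s)
    # Autocorrelation for non-negative lags, normalized by overlap length.
    ac = np.correlate(s, s, mode="full")[n - 1:]
    ac = ac / np.maximum(n - np.arange(n), 1)
    # Strongest lag within a plausible search window gives the period.
    search = ac[min_lag : n // 2 + 1]
    return min_lag + int(np.argmax(search))

def fit_parabola(frames, positions):
    """Parabolic (degree-2) regression over a noisy key-point trajectory;
    returns the fitted (smoothed) positions."""
    coeffs = np.polyfit(frames, positions, 2)
    return np.polyval(coeffs, frames)

# Synthetic example: a periodic signal with a 25-frame gait period.
t = np.arange(90)
width = np.sin(2 * np.pi * t / 25)
period = estimate_gait_period(width)  # dominant period in frames
```

In practice the autocorrelation window and the regression degree would be tuned to the frame rate and to how much of a joint's trajectory is being smoothed at once.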
Pages: 306 - 309
Page count: 4
Related Papers
50 records
  • [31] Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system
    Kanko, Robert M.
    Laende, Elise K.
    Strutzenberger, Gerda
    Brown, Marcus
    Selbie, W. Scott
    DePaul, Vincent
    Scott, Stephen H.
    Deluzio, Kevin J.
    JOURNAL OF BIOMECHANICS, 2021, 122
  • [32] Gait parameter and speed estimation from the frontal view gait video data based on the gait motion and spatial modeling
    Okusa, K. (k.okusa@me.com), 1600, International Association of Engineers (43):
  • [33] Remote Gait Type Classification System Using Markerless 2D Video
    Albuquerque, Pedro
    Machado, Joao Pedro
    Verlekar, Tanmay Tulsidas
    Correia, Paulo Lobato
    Soares, Luis Ducla
    DIAGNOSTICS, 2021, 11 (10)
  • [34] GAIT TRAINING AND THE USE OF A VIDEO MOTION ANALYZER
    GOLDBERG, G
    MAYER, NH
    MACKS, P
    KALSTEIN, R
    ARCHIVES OF PHYSICAL MEDICINE AND REHABILITATION, 1981, 62 (10): : 517 - 517
  • [35] Extracting representative motion flows for effective video retrieval
    Zhao, Zhe
    Cui, Bin
    Cong, Gao
    Huang, Zi
    Shen, Heng Tao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2012, 58 (03) : 687 - 711
  • [37] Feasibility of Markerless Motion Capture for Three-Dimensional Gait Assessment in Community Settings
    McGuirk, Theresa E.
    Perry, Elliott S.
    Sihanath, Wandasun B.
    Riazati, Sherveen
    Patten, Carolynn
    FRONTIERS IN HUMAN NEUROSCIENCE, 2022, 16
  • [38] Pose2Gait: Extracting Gait Features from Monocular Video of Individuals with Dementia
    Malin-Mayor, Caroline
    Adeli, Vida
    Sabo, Andrea
    Noritsyn, Sergey
    Gorodetsky, Carolina
    Fasano, Alfonso
    Iaboni, Andrea
    Taati, Babak
    PREDICTIVE INTELLIGENCE IN MEDICINE, PRIME 2023, 2023, 14277 : 265 - 276
  • [39] Feasibility and usefulness of video-based markerless two-dimensional automated gait analysis, in providing objective quantification of gait and complementing the evaluation of gait in children with cerebral palsy
    Pantzar-Castilla, Evelina
    Balta, Diletta
    Della Croce, Ugo
    Cereatti, Andrea
    Riad, Jacques
    BMC MUSCULOSKELETAL DISORDERS, 2024, 25 (01)
  • [40] Markerless 3D human motion tracking for monocular video sequences
    Zou, Beiji
    Chen, Shu
    Peng, Xiaoning
    Shi, Cao
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2008, 20 (08): : 1047 - 1055