A View-Invariant Action Recognition Based on Multi-View Space Hidden Markov Models

Times Cited: 3
Authors
Ji, Xiaofei [1 ]
Wang, Ce [1 ]
Li, Yibo [1 ]
Affiliations
[1] Shenyang Aerosp Univ, Sch Automat, Shenyang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Action recognition; view-invariant; view space partition; hidden Markov models;
DOI
10.1142/S021984361450011X
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Vision-based action recognition is already widely used in human-machine interfaces, but recognizing human actions from different viewpoints remains a challenging problem. To address this issue, a novel multi-view space hidden Markov models (HMMs) algorithm for view-invariant action recognition is proposed. First, a view-insensitive feature representation that combines a bag-of-words of interest points with an amplitude histogram of optical flow is used to describe the human action sequences. The combined features not only make it possible to link the traditional bag-of-words interest-point representation with HMMs, but also greatly reduce redundancy in the video. Second, the view space is partitioned into multiple sub-view spaces according to the camera rotation viewpoint, and a human action model is trained with the HMM algorithm in each sub-view space. During recognition, the probabilities of the test sequence (i.e., the observation sequence) under the given multi-view space HMMs are computed, which measures the similarity between each sub-view space and the viewpoint of the test sequence. Finally, the action with an unknown viewpoint is recognized via probability-weighted combination of the sub-view results. Experimental results on the multi-view action dataset IXMAS demonstrate that the proposed approach is highly efficient and effective for view-invariant action recognition.
Pages: 17
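The pipeline summarized in the abstract can be illustrated with a short sketch. The fragment below is a hypothetical, minimal illustration rather than the authors' implementation: it assumes per-frame features (bag-of-words of interest points combined with the optical-flow amplitude histogram) have already been extracted elsewhere as (T, D) arrays, trains one Gaussian HMM per (action, sub-view space) pair using the hmmlearn library, and combines sub-view log-likelihoods with softmax viewpoint weights as one plausible reading of the paper's probability-weighted combination. Function names, the number of hidden states, and the weighting scheme are assumptions.

# Hypothetical sketch of a multi-view space HMM recognition pipeline.
# hmmlearn's GaussianHMM stands in for the paper's HMMs; feature
# extraction is assumed to happen elsewhere.
import numpy as np
from hmmlearn.hmm import GaussianHMM


def train_multi_view_hmms(train_data, n_states=5, n_iter=100):
    """Train one HMM per (action, sub-view space) pair.

    train_data: dict mapping (action, sub_view) -> list of (T, D) feature arrays.
    """
    models = {}
    for (action, sub_view), sequences in train_data.items():
        X = np.vstack(sequences)                   # all frames stacked row-wise
        lengths = [len(seq) for seq in sequences]  # frame count of each sequence
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=n_iter)
        hmm.fit(X, lengths)
        models[(action, sub_view)] = hmm
    return models


def recognize(models, test_seq):
    """Score a test sequence of unknown viewpoint against every action.

    For each action, the log-likelihoods under its sub-view HMMs are turned
    into soft viewpoint weights (stable softmax), and the weighted sum of the
    sub-view log-likelihoods is the action score.
    """
    actions = sorted({action for action, _ in models})
    scores = {}
    for action in actions:
        logliks = np.array([m.score(test_seq)
                            for (a, _), m in sorted(models.items()) if a == action])
        w = np.exp(logliks - logliks.max())         # viewpoint similarity weights
        w /= w.sum()
        scores[action] = float(np.dot(w, logliks))  # probability-weighted combination
    return max(scores, key=scores.get), scores

In use, train_multi_view_hmms would be called once per training split of IXMAS-style data, and recognize returns both the predicted action label and the per-action scores, so the viewpoint weighting can be inspected directly.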