Human action recognition using Kinect multimodal information

Times cited: 0
Authors
Tang, Chao [1 ]
Zhang, Miao-hui [2 ]
Wang, Xiao-feng [1 ]
Li, Wei [3 ]
Cao, Feng [4 ]
Hu, Chun-ling [1 ]
Affiliations
[1] Hefei Univ, Dept Comp Sci & Technol, 99 Jinxiu Ave, Hefei 230601, Anhui, Peoples R China
[2] Jiangxi Acad Sci, Inst Energy, 7777 Changdong Ave, Nanchang 330096, Jiangxi, Peoples R China
[3] Xiamen Univ Technol, Sch Comp & Informat Engn, 600 Ligong Rd, Xiamen 361024, Fujian, Peoples R China
[4] Shanxi Univ, Sch Comp & Informat Technol, 92 Wucheng Rd, Taiyuan 030006, Shanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
human action recognition; Kinect sensor; multimodal features; k-nearest neighbor classifier;
DOI
10.1117/12.2505416
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Since its successful introduction and popularization, the Kinect sensor has been widely applied in intelligent surveillance, human-machine interaction, human action recognition, and related fields. This paper presents a human action recognition method based on multimodal information from the Kinect sensor. First, the HOG feature from the RGB modality, the space-time interest point feature from the depth modality, and the relative-position feature of human body joints from the skeleton modality are extracted to represent human action. Then, three nearest-neighbor classifiers, each using a different distance measure, predict the class label of a test sample from its three modal feature representations. Experimental results on public datasets show that the proposed method is simple, fast, and efficient compared with other action recognition algorithms.
Pages: 12
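
The abstract above describes a per-modality nearest-neighbor scheme: one k-NN classifier for each Kinect modality (RGB HOG, depth space-time interest points, skeleton joint relative positions), each classifier using its own distance measure. The sketch below is a minimal illustration of that scheme, not the authors' implementation: the specific metrics, feature dimensionalities, helper names (fit_modal_classifiers, predict_action), and the majority-vote fusion of the three per-modality predictions are all assumptions, since the abstract does not specify them. It assumes NumPy and scikit-learn are available.

# Minimal, illustrative sketch (not the authors' code) of the multimodal
# k-NN scheme summarized in the abstract: one nearest-neighbor classifier
# per Kinect modality, each with its own distance metric.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Assumed metric per modality; the abstract only states that the three
# classifiers use different distance measures, not which ones.
MODALITY_METRICS = {
    "rgb_hog": "euclidean",       # HOG features from the RGB stream
    "depth_stip": "manhattan",    # space-time interest points from depth
    "skeleton_joints": "cosine",  # relative joint positions from skeleton
}


def fit_modal_classifiers(features, labels, k=5):
    """Fit one k-NN classifier per modality.

    features: dict mapping modality name -> (n_samples, n_dims) array
    labels:   (n_samples,) array of integer action labels
    """
    classifiers = {}
    for name, metric in MODALITY_METRICS.items():
        clf = KNeighborsClassifier(n_neighbors=k, metric=metric,
                                   algorithm="brute")
        clf.fit(features[name], labels)
        classifiers[name] = clf
    return classifiers


def predict_action(classifiers, test_features):
    """Predict one label per modality, then fuse by majority vote
    (the fusion rule is an assumption; the abstract does not specify it)."""
    votes = [
        int(clf.predict(test_features[name].reshape(1, -1))[0])
        for name, clf in classifiers.items()
    ]
    return int(np.bincount(votes).argmax())


if __name__ == "__main__":
    # Synthetic stand-ins for the three modal feature vectors; the
    # dimensionalities are arbitrary, not taken from the paper.
    rng = np.random.default_rng(0)
    n_samples, n_classes = 60, 4
    labels = rng.integers(0, n_classes, n_samples)
    features = {
        "rgb_hog": rng.normal(size=(n_samples, 324)),
        "depth_stip": rng.normal(size=(n_samples, 162)),
        "skeleton_joints": rng.normal(size=(n_samples, 60)),
    }
    classifiers = fit_modal_classifiers(features, labels)
    test_sample = {name: x[0] for name, x in features.items()}
    print("predicted action class:", predict_action(classifiers, test_sample))

In this sketch each modality votes independently and ties resolve to the lowest class index; a weighted or score-level fusion could equally be used, but the abstract does not state which combination rule the authors adopt.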