Human action recognition using Kinect multimodal information

Citations: 0
Authors
Tang, Chao [1 ]
Zhang, Miao-hui [2 ]
Wang, Xiao-feng [1 ]
Li, Wei [3 ]
Cao, Feng [4 ]
Hu, Chun-ling [1 ]
Affiliations
[1] Hefei Univ, Dept Comp Sci & Technol, 99 Jinxiu Ave, Hefei 230601, Anhui, Peoples R China
[2] Jiangxi Acad Sci, Inst Energy, 7777 Changdong Ave, Nanchang 330096, Jiangxi, Peoples R China
[3] Xiamen Univ Technol, Sch Comp & Informat Engn, 600 Ligong Rd, Xiamen 361024, Fujian, Peoples R China
[4] Shanxi Univ, Sch Comp & Informat Technol, 92 Wucheng Rd, Taiyuan 030006, Shanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
human action recognition; Kinect sensor; multimodal features; k nearest neighbor classifier;
DOI
10.1117/12.2505416
Chinese Library Classification (CLC)
TP18 [theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since its successful introduction and popularization, the Kinect has been widely applied in intelligent surveillance, human-machine interaction, and human action recognition. This paper presents a human action recognition method based on multimodal information captured by the Kinect sensor. First, three features are extracted to represent human action: a HOG feature from the RGB modality, a space-time interest points feature from the depth modality, and a feature describing the relative positions of body joints from the skeleton modality. Then, three nearest neighbor classifiers, each using a different distance measure, predict the class label of a test sample represented by the three modal features. Experimental results on public datasets show that the proposed method is simple, fast, and efficient compared with other action recognition algorithms.
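The classification stage described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the feature dimensions, the specific distance measures assigned to each modality, and the majority-vote fusion of the three per-modality predictions are all assumptions.

```python
# Hypothetical sketch of the per-modality nearest-neighbor stage: one k-NN
# classifier per modality (RGB HOG, depth STIP, skeleton joint positions),
# each with its own distance measure, fused by majority vote.
# Dimensions, metric assignments, and fusion rule are assumptions.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3, metric="euclidean"):
    """Predict the label of sample x by a k-nearest-neighbor vote."""
    if metric == "euclidean":
        d = np.linalg.norm(train_X - x, axis=1)
    elif metric == "manhattan":
        d = np.abs(train_X - x).sum(axis=1)
    elif metric == "cosine":
        d = 1.0 - (train_X @ x) / (
            np.linalg.norm(train_X, axis=1) * np.linalg.norm(x) + 1e-12)
    else:
        raise ValueError(f"unknown metric: {metric}")
    nearest = np.argsort(d)[:k]          # indices of the k closest samples
    return Counter(train_y[nearest]).most_common(1)[0][0]

def predict_action(train_feats, train_y, test_feats, metrics):
    """Fuse per-modality k-NN predictions by majority vote."""
    votes = [knn_predict(train_feats[m], train_y, test_feats[m],
                         metric=metrics[m])
             for m in train_feats]
    return Counter(votes).most_common(1)[0][0]

# Toy example: 2 action classes, 3 modalities with synthetic features.
rng = np.random.default_rng(0)
train_y = np.array([0, 0, 0, 1, 1, 1])
train_feats = {m: rng.normal(train_y[:, None] * 2.0, 0.3, size=(6, dim))
               for m, dim in [("rgb_hog", 8), ("depth_stip", 6),
                              ("skeleton", 4)]}
# Test sample: a slightly perturbed copy of a class-1 training sample.
test_feats = {m: train_feats[m][4] + 0.05 for m in train_feats}
metrics = {"rgb_hog": "euclidean", "depth_stip": "manhattan",
           "skeleton": "cosine"}
print(predict_action(train_feats, train_y, test_feats, metrics))  # -> 1
```

Fusing decisions rather than concatenating features lets each modality keep a distance measure suited to its feature type, which appears to be the motivation behind using three separate classifiers.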
Pages: 12
Related papers
50 records
  • [1] Human action recognition based on kinect
    Yang, Mingya
    Lin, Zhanjian
    Tang, Weiwei
    Zheng, Lingxiang
    Zhou, Jianyang
    Journal of Computational Information Systems, 2014, 10 (12) : 5347 - 5354
  • [2] Fisherposes for Human Action Recognition Using Kinect Sensor Data
    Ghojogh, Benyamin
    Mohammadzade, Hoda
    Mokari, Mozhgan
    IEEE SENSORS JOURNAL, 2018, 18 (04) : 1612 - 1627
  • [3] HUMAN ACTION RECOGNITION BASED ON ACTION FORESTS MODEL USING KINECT CAMERA
    Chuan, Chi-Hung
    Chen, Ying-Nong
    Fan, Kuo-Chin
    IEEE 30TH INTERNATIONAL CONFERENCE ON ADVANCED INFORMATION NETWORKING AND APPLICATIONS WORKSHOPS (WAINA 2016), 2016, : 914 - 917
  • [4] Learning skeleton information for human action analysis using Kinect
    Li, Gang
    Li, Chunyu
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 84
  • [5] Kinect and Episodic Reasoning for Human Action Recognition
    Cantarero, Ruben
    Santofimia, Maria J.
    Villa, David
    Requena, Roberto
    Campos, Maria
    Florez-Revuelta, Francisco
    Nebel, Jean-Christophe
    Martinez-del-Rincon, Jesus
    Lopez, Juan C.
    DISTRIBUTED COMPUTING AND ARTIFICIAL INTELLIGENCE, (DCAI 2016), 2016, 474 : 147 - 154
  • [6] Implementation of Human Action Recognition System Using Multiple Kinect Sensors
    Kwon, Beom
    Kim, Doyoung
    Kim, Junghwan
    Lee, Inwoong
    Kim, Jongyoo
    Oh, Heeseok
    Kim, Haksub
    Lee, Sanghoon
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2015, PT I, 2015, 9314 : 334 - 343
  • [7] Action and Digit Recognition of Finger Using Kinect
    Shang, Wanfeng
    Ma, Hongwei
    Zang, Hailong
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014, 2014, : 724 - 728
  • [8] Joint Motion Similarity (JMS)-Based Human Action Recognition using Kinect
    Li, Jiawei
    Chen, Jianxin
    Sun, Linhui
    2016 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2016, : 206 - 213
  • [9] Using a Multilearner to Fuse Multimodal Features for Human Action Recognition
    Tang, Chao
    Hu, Huosheng
    Wang, Wenjian
    Li, Wei
    Peng, Hua
    Wang, Xiaofeng
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [10] Human action recognition using accumulated moving information
    Dept. Communication and Information, Chungbuk National University, Cheongju, Korea, Republic of
    [author not listed]
    Int. J. Multimedia Ubiquitous Eng., 10 : 211 - 222