Multifeature fusion action recognition based on key frames

Cited by: 3
Authors
Zhao, Yuerong [1,2]
Guo, Hongbo [1,2]
Gao, Ling [1,2,3]
Wang, Hai [1,2]
Zheng, Jie [1,2]
Zhang, Kan [1,2]
Zheng, Yong [1,2]
Affiliations
[1] Northwest Univ, Sch Informat & Technol, Xian 710127, Peoples R China
[2] Northwest Univ, State Prov Joint Engn & Res Ctr Adv Networking &, Xian, Peoples R China
[3] Xian Polytech Univ, Xian, Peoples R China
Source
Concurrency and Computation: Practice and Experience
Funding
National Natural Science Foundation of China
Keywords
action recognition; joint point contribution; key frame extraction; multifeature fusion
DOI
10.1002/cpe.6137
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Video-based human action recognition is an important technology in computer vision with great commercial value, and it has attracted extensive attention in both academia and industry. Its applications include surveillance, robotics, health care, video search, and human-computer interaction. Recognizing actions in videos nevertheless faces many challenges, such as cluttered backgrounds, occlusions, viewpoint variation, execution rate, and camera motion; in addition, data redundancy and reliance on a single feature have largely limited recognition accuracy. In this article, a novel action recognition method is proposed that combines key frame extraction and multifeature fusion to improve recognition accuracy. The main contributions are as follows: 1) to reduce data redundancy, a key frame extraction method based on joint point contribution weighting is proposed to extract video key frames; 2) different information flows are extracted from the resulting key frame sequences, and separate convolutional neural networks produce classification results for each flow, which are then fused so that the information in the different flows complements one another. Experimental results show that the method improves the accuracy of action recognition.
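The abstract describes two technical steps: scoring frames by contribution-weighted joint motion to select key frames, and fusing the classification outputs of per-flow convolutional networks. The following is a minimal Python sketch of both steps, not the paper's exact formulation: the function names, the uniform joint weights, the top-k motion-energy selection rule, and the score-averaging fusion are all illustrative assumptions.

import numpy as np

def select_key_frames(skeleton, joint_weights, k):
    # skeleton: (T, J, 3) array, T frames of J three-dimensional joints.
    # joint_weights: (J,) per-joint contribution weights; in a weighted
    # scheme, fast-moving joints (e.g., wrists) would get larger values.
    # Per-joint displacement between consecutive frames: shape (T-1, J).
    motion = np.linalg.norm(np.diff(skeleton, axis=0), axis=2)
    # Contribution-weighted motion energy of each frame: shape (T-1,).
    scores = motion @ joint_weights
    # Keep the k highest-scoring frames, restored to temporal order
    # (+1 because frame t is scored against frame t-1).
    idx = np.sort(np.argsort(scores)[-k:] + 1)
    return skeleton[idx], idx

def fuse_stream_predictions(stream_probs):
    # stream_probs: list of (num_classes,) probability vectors, one per
    # information flow; averaging them is one simple late-fusion rule.
    fused = np.mean(np.stack(stream_probs), axis=0)
    return fused, int(np.argmax(fused))

# Usage with random stand-ins for real skeleton data and CNN outputs.
rng = np.random.default_rng(0)
seq = rng.normal(size=(60, 25, 3))                      # 60 frames, 25 joints
weights = np.ones(25) / 25                              # uniform placeholder weights
key_frames, picked = select_key_frames(seq, weights, k=16)
probs = [rng.dirichlet(np.ones(10)) for _ in range(3)]  # 3 flows, 10 classes
fused, label = fuse_stream_predictions(probs)

Averaging per-flow class scores is the simplest fusion rule; weighted averaging or a learned fusion layer are common alternatives when some flows are more reliable than others.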
Pages: 13