Multi-Layered Deep Learning Features Fusion for Human Action Recognition

Times Cited: 0
Authors
Kiran, Sadia [1 ]
Khan, Muhammad Attique [1 ]
Javed, Muhammad Younus [1 ]
Alhaisoni, Majed [2 ]
Tariq, Usman [3 ]
Nam, Yunyoung [4 ]
Damasevicius, Robertas [5 ]
Sharif, Muhammad [6 ]
Affiliations
[1] HITEC Univ Taxila, Dept Comp Sci, Taxila, Pakistan
[2] Univ Hail, Coll Comp Sci & Engn, Hail, Saudi Arabia
[3] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci, Al Khraj, Saudi Arabia
[4] Soonchunhyang Univ, Dept Comp Sci & Engn, Asan, South Korea
[5] Silesian Tech Univ, Fac Appl Math, Gliwice, Poland
[6] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Wah Cantt, Pakistan
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2021, Vol. 69, Issue 03
Keywords
Action recognition; transfer learning; features fusion; features selection; classification; CNN; TRAJECTORIES; AUTOENCODER; DENSE;
DOI
10.32604/cmc.2021.017800
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Human Action Recognition (HAR) has been an active research topic in machine learning for the last few decades, with visual surveillance, robotics, and pedestrian detection as its main applications. Computer vision researchers have introduced many HAR techniques, but these still face challenges such as redundant features and high computational cost. In this article, we propose a new deep learning method for HAR. Video frames are first pre-processed using a global contrast approach and then used to train a deep learning model through domain transfer learning; a pre-trained ResNet-50 model serves as the backbone. Features are extracted from two layers, Global Average Pool (GAP) and Fully Connected (FC), and fused by Canonical Correlation Analysis (CCA). Features are then selected using a Shannon entropy-based threshold function, and the selected features are finally passed to multiple classifiers for final classification. Experiments were conducted on five publicly available datasets: IXMAS, UCF Sports, YouTube, UT-Interaction, and KTH, achieving accuracies of 89.6%, 99.7%, 100%, 96.7%, and 96.6%, respectively. Comparison with existing techniques shows that the proposed method improves accuracy for HAR while remaining computationally fast in terms of execution time.
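The abstract outlines a concrete pipeline: ResNet-50 features from the GAP and FC layers, CCA-based fusion, Shannon entropy-based selection, and a final classifier. The sketch below illustrates those steps. It is a minimal illustration under stated assumptions, not the authors' implementation: the framework (PyTorch and scikit-learn; the paper does not name one), the fusion rule (concatenating the two CCA projections), the number of CCA components, the mean-entropy threshold, and the random `frames`/`labels` stand-ins are all assumptions, and the global contrast pre-processing and fine-tuning steps are omitted.

# A minimal sketch of the pipeline summarized in the abstract, assuming a
# PyTorch + scikit-learn implementation (the paper does not name a framework).
# Pre-processing and fine-tuning are omitted; `frames` and `labels` below
# are random stand-ins for real data.
import numpy as np
import torch
from torchvision import models
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

gap_feats, fc_feats = [], []

def grab_gap(_module, _inputs, output):
    # avgpool emits (N, 2048, 1, 1); flatten to (N, 2048) GAP features
    gap_feats.append(output.flatten(1).cpu().numpy())

model.avgpool.register_forward_hook(grab_gap)

# Hypothetical stand-ins for pre-processed video frames and action labels
frames = [torch.randn(16, 3, 224, 224) for _ in range(2)]
labels = np.random.randint(0, 5, size=32)

with torch.no_grad():
    for batch in frames:
        fc_feats.append(model(batch).numpy())   # FC-layer output, (N, 1000)

gap = np.concatenate(gap_feats)                 # (N, 2048) GAP features
fc = np.concatenate(fc_feats)                   # (N, 1000) FC features

# CCA fusion: project both views into a shared correlated subspace and
# concatenate the projections (one common fusion variant; the exact rule
# and the component count are assumptions)
gap_c, fc_c = CCA(n_components=8).fit_transform(gap, fc)
fused = np.concatenate([gap_c, fc_c], axis=1)

def shannon_entropy(column, bins=16):
    # Histogram-based Shannon entropy of one feature column, in bits
    hist, _ = np.histogram(column, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Entropy-based selection: keep columns whose entropy reaches the mean
# entropy (the threshold rule here is an assumption)
entropies = np.array([shannon_entropy(fused[:, j]) for j in range(fused.shape[1])])
selected = fused[:, entropies >= entropies.mean()]

# One of the "multiple classifiers" could be a linear SVM
clf = SVC(kernel='linear').fit(selected, labels)
print('train accuracy:', clf.score(selected, labels))

In practice the activations would come from the fine-tuned network and per-frame features would be aggregated per video before classification; both details are outside this sketch.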
Pages: 4061 - 4075
Page count: 15
Related Papers
50 records in total
  • [1] Human-action recognition using a multi-layered fusion scheme of Kinect modalities
    Seddik, Bassem
    Gazzah, Sami
    Ben Amara, Najoua Essoukri
    [J]. IET COMPUTER VISION, 2017, 11 (07) : 530 - 540
  • [2] Multi-controller fusion in multi-layered reinforcement learning
    Takahashi, Y
    Asada, M
    [J]. MFI2001: INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, 2001, : 7 - 12
  • [3] Feature Fusion of Deep Spatial Features and Handcrafted Spatiotemporal Features for Human Action Recognition
    Uddin, Md Azher
    Lee, Young-Koo
    [J]. SENSORS, 2019, 19 (07)
  • [4] Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion
    Khan, Seemab
    Khan, Muhammad Attique
    Alhaisoni, Majed
    Tariq, Usman
    Yong, Hwan-Seung
    Armghan, Ammar
    Alenezi, Fayadh
    [J]. SENSORS, 2021, 21 (23)
  • [5] Deep learning network model based on fusion of spatiotemporal features for action recognition
    Yang, Ge
    Zou, Wu-xing
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (07) : 9875 - 9896
  • [6] Multi-Level Deep Learning Depth and Color Fusion for Action Recognition
    Zelensky, A.
    Voronin, V.
    Zhdanova, M.
    Gapon, N.
    Tokareva, O.
    Semenishchev, E.
    [J]. OPTICS, PHOTONICS AND DIGITAL TECHNOLOGIES FOR IMAGING APPLICATIONS VII, 2022, 12138
  • [7] Human Action Recognition Based on Fusion Features
    Yang, Shiqiang
    Yang, Jiangtao
    Li, Fei
    Fan, Guohao
    Li, Dexin
    [J]. CYBER SECURITY INTELLIGENCE AND ANALYTICS, 2020, 928 : 569 - 579
  • [8] Jointly Learning Multi-view Features for Human Action Recognition
    Wang, Ruoshi
    Liu, Zhigang
    Yin, Ziyang
    [J]. PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 4858 - 4861