An Improved Two-stream 3D Convolutional Neural Network for Human Action Recognition

Citations: 3
Authors
Chen, Jun [1]
Xu, Yuanping [1]
Zhang, Chaolong [1,2]
Xu, Zhijie [2]
Meng, Xiangxiang [1]
Wang, Jie [1]
Affiliations
[1] Chengdu University of Information Technology, School of Software Engineering, Chengdu, People's Republic of China
[2] University of Huddersfield, School of Computing and Engineering, Huddersfield, West Yorkshire, England
Keywords
Optical Flow; Human Action Recognition; Two-stream CNN; Three-dimensional CNN
DOI
10.23919/iconac.2019.8894962
Chinese Library Classification (CLC)
TP [Automation and computer technology]
Discipline Classification Code
0812
Abstract
To precisely capture global contextual information from videos with heavy camera motion and scene changes, this study proposes an improved spatiotemporal two-stream neural network architecture with a novel convolutional fusion layer. The three main improvements of this study are: 1) a ResNet-101 network is integrated into each of the two streams of the target network independently; 2) the two kinds of feature maps (i.e., optical-flow motion and RGB-channel information) produced by the corresponding convolutional layers of the two streams are superimposed on each other; 3) temporal information is combined with spatial information by an integrated three-dimensional (3D) convolutional neural network (CNN) to extract more latent information from the videos. The proposed approach was evaluated on the UCF-101 and HMDB51 benchmark datasets, and the experimental results show that the proposed two-stream 3D CNN model achieves a substantial improvement in recognition rate for video-based analysis.
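The fusion described in points 2) and 3) can be illustrated with a short sketch. The following is a minimal PyTorch sketch, assuming element-wise summation as the superposition operator, 2048-channel ResNet-101 feature maps, and a single 3x3x3 convolution as the 3D fusion layer; the module name TwoStreamFusion3D, the channel count, and the 101-class head (matching UCF-101) are illustrative assumptions, not the paper's exact configuration.

# A minimal PyTorch sketch (an assumption, not the paper's published code) of
# the fusion idea: feature maps from the RGB and optical-flow streams are
# superimposed element-wise, then a 3D convolution mixes spatial and temporal
# information before classification.
import torch
import torch.nn as nn

class TwoStreamFusion3D(nn.Module):
    def __init__(self, channels=2048, num_classes=101):
        super().__init__()
        # Hypothetical convolutional fusion layer: a single 3x3x3 kernel
        # operating jointly over time (T) and space (H, W).
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(channels, num_classes)  # e.g. 101 UCF-101 classes

    def forward(self, rgb_feat, flow_feat):
        # rgb_feat / flow_feat: (N, C, T, H, W) stacks of feature maps taken
        # from corresponding convolutional layers of the two ResNet-101 streams.
        fused = rgb_feat + flow_feat              # superimpose the two streams
        fused = torch.relu(self.conv3d(fused))    # joint spatiotemporal conv
        fused = self.pool(fused).flatten(1)       # global average pooling
        return self.fc(fused)                     # class logits

# Usage with dummy feature maps: batch of 2, 2048 channels, 8 frames, 7x7 grid.
rgb = torch.randn(2, 2048, 8, 7, 7)
flow = torch.randn(2, 2048, 8, 7, 7)
logits = TwoStreamFusion3D()(rgb, flow)
print(logits.shape)  # torch.Size([2, 101])

Summation keeps the fused tensor the same size as each input, which is one common choice for convolutional fusion; concatenation along the channel axis followed by a 1x1x1 convolution would be an equally plausible reading of the abstract.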
Pages: 135-140
Page count: 6
Related Papers
(50 in total)
  • [1] Improving human action recognition with two-stream 3D convolutional neural network
    Khong, Van-Minh; Tran, Thanh-Hai
    2018 1st International Conference on Multimedia Analysis and Pattern Recognition (MAPR), 2018
  • [2] 3D Convolutional Two-Stream Network for Action Recognition in Videos
    Li, Min; Qi, Yuezhu; Yang, Jian; Zhang, Yanfang; Ren, Junxing; Du, Hong
    2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), 2019: 1697-1701
  • [3] Transferable two-stream convolutional neural network for human action recognition
    Xiong, Qianqian; Zhang, Jianjing; Wang, Peng; Liu, Dongdong; Gao, Robert X.
    Journal of Manufacturing Systems, 2020, 56: 605-614
  • [4] Improved human action recognition approach based on two-stream convolutional neural network model
    Liu, Congcong; Ying, Jie; Yang, Haima; Hu, Xing; Liu, Jin
    The Visual Computer, 2021, 37(6): 1327-1341
  • [5] Two-Stream Convolutional Neural Network for Video Action Recognition
    Qiao, Han; Liu, Shuang; Xu, Qingzhen; Liu, Shouqiang; Yang, Wanggan
    KSII Transactions on Internet and Information Systems, 2021, 15(10): 3668-3684
  • [6] Two-Stream 3D Convolution Attentional Network for Action Recognition
    Kusumoseniarto, Raden Hadapiningsyah
    2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (ICIVPR), 2020
  • [7] Action Recognition Using Action Sequences Optimization and Two-Stream 3D Dilated Neural Network
    Xiong, Xin; Min, Weidong; Han, Qing; Wang, Qi; Zha, Cheng
    Computational Intelligence and Neuroscience, 2022, vol. 2022
  • [8] Human Action Recognition Based on a Two-stream Convolutional Network Classifier
    Silva, Vinicius de Oliveira; Vidal, Flavio de Barros; Soares Romariz, Alexandre Ricardo
    2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), 2017: 774-778
  • [9] Human Action Recognition Based on Improved Two-Stream Convolution Network
    Wang, Zhongwen; Lu, Haozhu; Jin, Junlan; Hu, Kai
    Applied Sciences-Basel, 2022, 12(12)