Mutually Reinforced Spatio-Temporal Convolutional Tube for Human Action Recognition

Cited by: 0
Authors
Wu, Haoze [1 ]
Liu, Jiawei [1 ]
Zha, Zheng-Jun [1 ]
Chen, Zhenzhong [2 ]
Sun, Xiaoyan [3 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Brain Inspired Intelligence Technol, Beijing, Peoples R China
[2] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan, Peoples R China
[3] Microsoft Res Asia, Intelligent Multimedia Grp, Beijing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recent works use 3D convolutional neural networks to explore spatio-temporal information for human action recognition. However, they either ignore the correlation between spatial and temporal features or incur high computational cost in spatio-temporal feature extraction. In this work, we propose a novel and efficient Mutually Reinforced Spatio-Temporal Convolutional Tube (MRST) for human action recognition. It decomposes 3D inputs into spatial and temporal representations, mutually enhances both by exploiting the interaction between spatial and temporal information, and selectively emphasizes informative spatial appearance and temporal motion, while reducing structural complexity. Moreover, we design three types of MRSTs according to the order in which spatial and temporal information are enhanced, each consisting of a spatio-temporal decomposition unit, a mutually reinforced unit, and a spatio-temporal fusion unit. An end-to-end deep network, MRST-Net, is built from MRSTs to better explore spatio-temporal information in human actions. Extensive experiments show that MRST-Net yields the best performance compared to state-of-the-art approaches.
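The abstract's decompose → mutually-reinforce → fuse pipeline can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the average-pooling decomposition, the sigmoid channel gating, and the additive fusion are all illustrative assumptions standing in for the learned units the paper describes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mrst_block(x, ws, wt):
    """Toy sketch of one mutually reinforced spatio-temporal block.

    x  : (C, T, H, W) feature volume
    ws : (C, C) hypothetical projection for the spatial descriptor
    wt : (C, C) hypothetical projection for the temporal descriptor
    """
    # Spatio-temporal decomposition unit (assumed here to be average
    # pooling): a spatial appearance map and a temporal motion sequence.
    spatial = x.mean(axis=1)           # (C, H, W)
    temporal = x.mean(axis=(2, 3))     # (C, T)

    # Mutually reinforced unit: each branch is summarized into a channel
    # descriptor, projected, and used to gate the *other* branch.
    s_desc = spatial.mean(axis=(1, 2))                      # (C,)
    t_desc = temporal.mean(axis=1)                          # (C,)
    spatial_gated = spatial * sigmoid(wt @ t_desc)[:, None, None]
    temporal_gated = temporal * sigmoid(ws @ s_desc)[:, None]

    # Spatio-temporal fusion unit (assumed additive): broadcast both
    # branches back to (C, T, H, W) and combine them.
    return spatial_gated[:, None, :, :] + temporal_gated[:, :, None, None]
```

Changing where the gating is applied first would yield the different MRST variants the abstract mentions; this sketch applies both gates in parallel for simplicity.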
Pages: 968-974
Page count: 7
Related papers
50 items in total
  • [41] SPATIO-TEMPORAL FASTMAP-BASED MAPPING FOR HUMAN ACTION RECOGNITION
    Belhadj, Lilia Chorfi
    Mignotte, Max
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 3046 - 3050
  • [42] Intelligent attendance monitoring system with spatio-temporal human action recognition
    Tsai, Ming-Fong
    Li, Min-Hao
    SOFT COMPUTING, 2023, 27 (08) : 5003 - 5019
  • [43] Human Action Recognition by SOM Considering the Probability of Spatio-temporal Features
    Ji, Yanli
    Shimada, Atsushi
    Taniguchi, Rin-ichiro
    NEURAL INFORMATION PROCESSING: MODELS AND APPLICATIONS, PT II, 2010, 6444 : 391 - 398
  • [44] Spatio-temporal Multi-level Fusion for Human Action Recognition
    Manh-Hung Lu
    Thi-Oanh Nguyen
    SOICT 2019: PROCEEDINGS OF THE TENTH INTERNATIONAL SYMPOSIUM ON INFORMATION AND COMMUNICATION TECHNOLOGY, 2019, : 298 - 305
  • [45] Spatio-Temporal Analysis for Human Action Detection and Recognition in Uncontrolled Environments
    Liu, Dianting
    Yan, Yilin
    Shyu, Mei-Ling
    Zhao, Guiru
    Chen, Min
    INTERNATIONAL JOURNAL OF MULTIMEDIA DATA ENGINEERING & MANAGEMENT, 2015, 6 (01): : 1 - 18
  • [46] Human action recognition using Local Spatio-Temporal Discriminant Embedding
    Jia, Kui
    Yeung, Dit-Yan
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 3040 - +
  • [47] Human Action Recognition in Video by Fusion of Structural and Spatio-temporal Features
    Borzeshi, Ehsan Zare
    Concha, Oscar Perez
    Piccardi, Massimo
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, 2012, 7626 : 474 - 482
  • [48] Local Spatio-Temporal Interest Point Detection for Human Action Recognition
    Li, Feng
    Du, Jixiang
    2012 IEEE FIFTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2012, : 579 - 582
  • [49] Vertex Feature Encoding and Hierarchical Temporal Modeling in a Spatio-Temporal Graph Convolutional Network for Action Recognition
    Papadopoulos, Konstantinos
    Ghorbel, Enjie
    Aouada, Djamila
    Ottersten, Bjoern
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 452 - 458
  • [50] Robust human action recognition based on spatio-temporal descriptors and motion temporal templates
    Dou, Jianfang
    Li, Jianxun
    OPTIK, 2014, 125 (07): : 1891 - 1896