Dense saliency-based spatiotemporal feature points for action recognition

Cited by: 0
Authors
Rapantzikos, Konstantinos [1 ]
Avrithis, Yannis [1 ]
Kollias, Stefanos [1 ]
Affiliations
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, GR-10682 Athens, Greece
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Several spatiotemporal feature point detectors have recently been used in video analysis for action recognition. Feature points are detected using a number of measures, such as saliency, cornerness, periodicity, and motion activity. Each of these measures is usually intensity-based and provides a different trade-off between density and informativeness. In this paper, we use saliency for feature point detection in videos and incorporate color and motion in addition to intensity. Our method uses a multi-scale volumetric representation of the video and involves spatiotemporal operations at the voxel level. Saliency is computed by a global minimization process subject to purely volumetric constraints, each related to an informative visual aspect, namely spatial proximity, scale, and feature similarity (intensity, color, motion). Points are selected as the extrema of the saliency response and prove to balance well between density and informativeness. We provide an intuitive view of the detected points and visual comparisons against state-of-the-art space-time detectors. Our detector outperforms them on the KTH dataset with Nearest-Neighbor classifiers and ranks among the top detectors under other classification frameworks. Statistics and comparisons are also reported on the more difficult Hollywood Human Actions (HOHA) dataset, where our method improves on currently published results.
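The abstract outlines the detection pipeline: build a multi-scale volumetric (voxel) representation of the clip, compute a saliency volume, and keep the extrema of the saliency response as spatiotemporal feature points. The snippet below is a minimal illustrative sketch of that last idea only, assuming a simple center-surround proxy for saliency on an intensity volume rather than the constrained global minimization the paper describes; the function names, scales, and thresholds are hypothetical and color/motion channels are omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def volumetric_saliency(volume, scales=(1, 2, 4)):
        # Toy multi-scale, center-surround saliency over a (T, H, W) intensity
        # volume. Illustrative proxy only, not the paper's constrained global
        # minimization; color and motion features are ignored here.
        volume = volume.astype(np.float64)
        saliency = np.zeros_like(volume)
        for sigma in scales:
            center = gaussian_filter(volume, sigma)        # fine spatiotemporal structure
            surround = gaussian_filter(volume, 3 * sigma)  # coarser local context
            saliency += np.abs(center - surround)          # conspicuity at this scale
        return saliency / len(scales)

    def detect_feature_points(saliency, neighborhood=5, rel_threshold=0.5):
        # Keep voxels that are local maxima of the saliency volume and exceed a
        # fraction of the global maximum; returns (t, y, x) coordinates.
        local_max = maximum_filter(saliency, size=neighborhood)
        peaks = (saliency == local_max) & (saliency > rel_threshold * saliency.max())
        return np.argwhere(peaks)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clip = rng.random((16, 64, 64))                    # synthetic 16-frame clip
        points = detect_feature_points(volumetric_saliency(clip))
        print(f"detected {len(points)} spatiotemporal feature points")

In this sketch the neighborhood size controls point density and the relative threshold controls informativeness, mirroring the density/informativeness trade-off discussed in the abstract.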
Pages: 1454-1461
Number of pages: 8
Related papers
50 records in total
  • [21] A Novel Feature Fusion Technique in Saliency-Based Visual Attention
    Armanfard, Zeynab
    Bahmani, Hamed
    Nasrabadi, Ali Motie
    2009 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTATIONAL TOOLS FOR ENGINEERING APPLICATIONS, 2009, : 230 - +
  • [22] Saliency-based scene recognition based on growing competitive neural network
    Atsumi, M
    2003 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-5, CONFERENCE PROCEEDINGS, 2003, : 2863 - 2870
  • [23] Human-Body Action Recognition Based on Dense Trajectories and Video Saliency
    Gao Deyong
    Kang Zibing
    Wang Song
    Wang Yangping
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (24)
  • [24] Comparison of feature combination strategies for saliency-based visual attention systems
    Itti, Laurent
    Koch, Christof
    Proceedings of SPIE - The International Society for Optical Engineering, 3644 : 473 - 482
  • [25] A comparison of feature combination strategies for saliency-based visual attention systems
    Itti, L
    Koch, C
    HUMAN VISION AND ELECTRONIC IMAGING IV, 1999, 3644 : 473 - 482
  • [26] Saliency-based multi-feature modeling for semantic image retrieval
    Bai, Cong
    Chen, Jia-nan
    Huang, Ling
    Kpalma, Kidiyo
    Chen, Shengyong
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 50 : 199 - 204
  • [27] SALIENCY-BASED FEATURE SELECTION STRATEGY IN STEREOSCOPIC PANORAMIC VIDEO GENERATION
    Wang, Haoyu
    Sandin, Daniel J.
    Schonfeld, Dan
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 1837 - 1841
  • [28] Recurrent Spatiotemporal Feature Learning for Action Recognition
    Chen, Ze
    Lu, Hongtao
    ICRAI 2018: PROCEEDINGS OF 2018 4TH INTERNATIONAL CONFERENCE ON ROBOTICS AND ARTIFICIAL INTELLIGENCE -, 2018, : 12 - 17
  • [29] Spatiotemporal feature enhancement network for action recognition
    Huang, Guancheng
    Wang, Xiuhui
    Li, Xuesheng
    Wang, Yaru
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (19) : 57187 - 57197
  • [30] Saliency-based Discriminant Tracking
    Mahadevan, Vijay
    Vasconcelos, Nuno
    CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4, 2009, : 1007 - 1013