Dense saliency-based spatiotemporal feature points for action recognition

Cited by: 0
Authors
Rapantzikos, Konstantinos [1 ]
Avrithis, Yannis [1 ]
Kollias, Stefanos [1 ]
Affiliations
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, GR-10682 Athens, Greece
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Several spatiotemporal feature point detectors have recently been used in video analysis for action recognition. Feature points are detected using a number of measures, namely saliency, cornerness, periodicity, motion activity, etc. Each of these measures is usually intensity-based and provides a different trade-off between density and informativeness. In this paper, we use saliency for feature point detection in videos and incorporate color and motion in addition to intensity. Our method uses a multi-scale volumetric representation of the video and involves spatiotemporal operations at the voxel level. Saliency is computed by a global minimization process subject to purely volumetric constraints, each related to an informative visual aspect, namely spatial proximity, scale and feature similarity (intensity, color, motion). Points are selected as the extrema of the saliency response and strike a good balance between density and informativeness. We provide an intuitive view of the detected points and visual comparisons against state-of-the-art space-time detectors. Our detector outperforms them on the KTH dataset using Nearest-Neighbor classifiers and ranks among the top using different classification frameworks. Statistics and comparisons are also reported on the more difficult Hollywood Human Actions (HOHA) dataset, improving on currently published results.
Pages: 1454-1461
Page count: 8
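
The abstract describes selecting feature points as extrema of a spatiotemporal saliency volume. The sketch below is a rough illustration only, not the authors' implementation: it assumes a precomputed per-voxel saliency volume and extracts its spatiotemporal local maxima by non-maximum suppression. The function name, neighborhood size and threshold are placeholders, and the paper's actual saliency computation (a constrained global minimization over intensity, color and motion features) is not reproduced here.

```python
# Minimal sketch (assumed interface, not the paper's method): given a
# per-voxel saliency volume S of shape (frames, height, width), keep the
# spatiotemporal local maxima above a threshold as feature points.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_saliency_extrema(saliency, neighborhood=(5, 7, 7), rel_threshold=0.3):
    """Return (t, y, x) coordinates of spatiotemporal saliency maxima.

    saliency      : 3-D array (frames, height, width) of saliency values.
    neighborhood  : window size used for non-maximum suppression.
    rel_threshold : keep only maxima above this fraction of the global max.
    """
    # A voxel is a candidate if it equals the maximum of its neighborhood.
    local_max = saliency == maximum_filter(saliency, size=neighborhood, mode="nearest")
    # Discard weak responses so the detector stays dense but informative.
    strong = saliency >= rel_threshold * saliency.max()
    return np.argwhere(local_max & strong)

if __name__ == "__main__":
    # Toy example: a random stand-in "saliency" volume of 20 frames, 64x64 pixels.
    rng = np.random.default_rng(0)
    S = rng.random((20, 64, 64))
    pts = detect_saliency_extrema(S)
    print(f"{len(pts)} spatiotemporal feature points detected")
```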
Related Papers
50 records total
  • [31] Dominguez, Sergio. Saliency-based similarity measure. REVISTA IBEROAMERICANA DE AUTOMATICA E INFORMATICA INDUSTRIAL, 2012, 9(04): 359-370.
  • [32] Wang, Meili; Guo, Shihui; Zhang, Hongming; He, Dongjian; Chang, Jian; Zhang, Jian J. Saliency-based relief generation. IETE TECHNICAL REVIEW, 2013, 30(06): 454-460.
  • [33] Trung-Nghia Le; Yen-Thanh Le; Minh-Triet Tran; Anh-Duc Duong. Essential keypoints to enhance visual object recognition with saliency-based metrics. 2014 13TH INTERNATIONAL CONFERENCE ON CONTROL AUTOMATION ROBOTICS & VISION (ICARCV), 2014: 111-116.
  • [34] Lee, Se-Ho; Kim, Jin-Hwan; Choi, Kwang Pyo; Sim, Jae-Young; Kim, Chang-Su. Video saliency detection based on spatiotemporal feature learning. 2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014: 1120-1124.
  • [35] Tajima, Satohiro; Komine, Kazuteru. Saliency-based color accessibility. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24(03): 1115-1126.
  • [36] Yang, Heekyung; Min, Kyungha. A saliency-based patch sampling approach for deep artistic media recognition. ELECTRONICS, 2021, 10(09).
  • [37] Fu, Qiang; Dong, Hongbin. Breast cancer recognition using saliency-based spiking neural network. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022.
  • [38] Zhao, Cairong; Liu, Chuancai. Sparse embedding feature combination strategy for saliency-based visual attention system. Journal of Computational Information Systems, 2010, 6(09): 2831-2838.
  • [39] Zhu Su; Bo Yuming; He Liang. Robust multi-feature visual tracking with a saliency-based target descriptor. PROCEEDINGS OF THE 35TH CHINESE CONTROL CONFERENCE 2016, 2016: 5008-5013.
  • [40] Shen, Lili; Zhang, Chuhe; Hou, Chunping. Saliency-based feature fusion convolutional network for blind image quality assessment. Signal, Image and Video Processing, 2022, 16: 419-427.