An Action Recognition Method Using Saliency Detection

Cited by: 1
Authors
Wang X. [1 ,2 ]
Qi C. [1 ]
Affiliations
[1] School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an
[2] School of Electrical Engineering and Automation, Qilu University of Technology(Shandong Academy of Sciences), Jinan
Source
Journal of Xi'an Jiaotong University, 2018, Vol. 52, Issue 2
Keywords
Action recognition; Low-rank matrix recovery; Saliency detection; Sparse representation
DOI
10.7652/xjtuxb201802004
Abstract
An action recognition method based on saliency detection is proposed to address the problem that the traditional dense-trajectory approach to action recognition does not distinguish action-related areas from background. Observing that video saliency changes little within a small spatio-temporal region, a video is first split temporally into several short sub-videos, and each sub-video is further divided spatially into small patches. A two-stage, patch-based saliency detection method then locates the action-related areas in each sub-video. In the first stage, a low-rank matrix recovery algorithm is applied to the motion information of a sub-video to compute its initial saliency, and the initial saliency values classify all patches in the sub-video into a candidate foreground set and an absolute background set. In the second stage, a weighted sparse representation over a dictionary built from the motion of the absolute-background patches computes the refined saliency of each patch and separates the true action-related areas from the candidate foreground set. Thresholding the refined saliency yields a binary saliency map marking the action-related areas. Finally, the saliency map is incorporated into dense tracking so that only trajectories in action-related areas are extracted for recognition. Experimental results on benchmark datasets show that the proposed method detects action-related areas well and improves the recognition rate by 2.5%-4.5% over dense trajectories. © 2018, Editorial Office of Journal of Xi'an Jiaotong University. All rights reserved.
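The two-stage patch saliency described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: low-rank matrix recovery is solved with a basic inexact-ALM robust-PCA loop, patch motion is assumed to be laid out as columns of a feature matrix `F` (a hypothetical representation), and a plain least-squares reconstruction residual stands in for the paper's weighted sparse coding in the second stage.

```python
import numpy as np

def rpca(M, lam=None, iters=100, tol=1e-7):
    """Low-rank matrix recovery (robust PCA) via a basic inexact ALM:
    decompose M into a low-rank part L plus a sparse part S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)                # penalty parameter
    mu_bar = mu * 1e7                               # cap on the penalty
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)  # dual variable
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ (np.maximum(sig - 1.0 / mu, 0.0)[:, None] * Vt)
        # sparse update: elementwise soft-thresholding (shrinkage)
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(M - L - S) / norm_M < tol:
            break
    return L, S

def two_stage_saliency(F, fg_thresh):
    """Stage 1: initial saliency = column norms of the sparse part of
    RPCA applied to the patch-motion matrix F (one column per patch);
    split patches into candidate-foreground and absolute-background sets.
    Stage 2: refined saliency of each candidate = its reconstruction
    residual over a dictionary of absolute-background columns (least
    squares here as a stand-in for the paper's weighted sparse coding)."""
    _, S = rpca(F)
    initial = np.linalg.norm(S, axis=0)         # per-patch initial saliency
    cand = np.where(initial >= fg_thresh)[0]    # candidate foreground set
    bg = np.where(initial < fg_thresh)[0]       # absolute background set
    D = F[:, bg]                                # background dictionary
    refined = np.zeros(F.shape[1])
    for j in cand:
        c, *_ = np.linalg.lstsq(D, F[:, j], rcond=None)
        refined[j] = np.linalg.norm(F[:, j] - D @ c)
    return initial, refined, cand, bg
```

In the full method, the foreground threshold would be chosen adaptively per sub-video, and the refined saliency would be thresholded into a binary map that gates which dense trajectories are kept for recognition.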
Pages: 24-29, 44
Related Papers
19 in total
  • [1] Wang H., Klaser A., Schmid C., et al., Dense trajectories and motion boundary descriptors for action recognition, International Journal of Computer Vision, 103, pp. 60-79, (2013)
  • [2] Wang H., Schmid C., Action recognition with improved trajectories, Proceedings of IEEE International Conference on Computer Vision, pp. 3551-3558, (2013)
  • [3] Jain M., Jegou H., Bouthemy P., Better exploiting motion for better action recognition, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2555-2562, (2013)
  • [4] Wang X., Qi C., Saliency-based dense trajectories for action recognition using low-rank matrix decomposition, Journal of Visual Communication & Image Representation, 47, pp. 361-374, (2016)
  • [5] Vig E., Dorr M., Cox D., Space-variant descriptor sampling for action recognition based on saliency and eye movements, Proceedings of 12th European Conference on Computer Vision, pp. 84-97, (2012)
  • [6] Somasundaram G., Cherian A., Morellas V., et al., Action recognition using global spatio-temporal features derived from sparse representations, Computer Vision and Image Understanding, 123, 7, pp. 1-13, (2014)
  • [7] Fang Z., Cui R., Jin J., Video saliency detection algorithm based on biological visual feature and visual psychology theory, Acta Physica Sinica, 66, 10, pp. 319-332, (2017)
  • [8] Liu Z., Li J., Ye L., et al., Saliency detection for unconstrained videos using superpixel-level graph and spatiotemporal propagation, IEEE Transactions on Circuits & Systems for Video Technology, 27, 12, pp. 2527-2542, (2017)
  • [9] Chen C.A., Wu X.F., Wang B., et al., Video saliency detection using dynamic fusion of spatial-temporal features in complex background with disturbance, Journal of Computer-Aided Design & Computer Graphics, 28, 5, pp. 802-812, (2016)
  • [10] Candes E.J., Li X., Ma Y., et al., Robust principal component analysis?, Journal of the ACM, 58, 3, (2011)