SMART Frame Selection for Action Recognition

Cited by: 0
Authors:
Gowda, Shreyank N. [1]
Rohrbach, Marcus [2]
Sevilla-Lara, Laura [1]
Affiliations:
[1] Univ Edinburgh, Edinburgh, Midlothian, Scotland
[2] Facebook AI Res, Menlo Park, CA, USA
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Action recognition is computationally expensive. In this paper, we address the problem of frame selection to improve the accuracy of action recognition. In particular, we show that selecting good frames helps in action recognition performance even in the trimmed videos domain. Recent work has successfully leveraged frame selection for long, untrimmed videos, where much of the content is not relevant, and easy to discard. In this work, however, we focus on the more standard short, trimmed action recognition problem. We argue that good frame selection can not only reduce the computational cost of action recognition but also increase the accuracy by getting rid of frames that are hard to classify. In contrast to previous work, we propose a method that instead of selecting frames by considering one at a time, considers them jointly. This results in a more efficient selection, where "good" frames are more effectively distributed over the video, like snapshots that tell a story. We call the proposed frame selection SMART and we test it in combination with different backbone architectures and on multiple benchmarks (Kinetics, Something-something, UCF101). We show that the SMART frame selection consistently improves the accuracy compared to other frame selection strategies while reducing the computational cost by a factor of 4 to 10 times. We also show that when the primary goal is recognition performance, our selection strategy can improve over recent state-of-the-art models and frame selection strategies on various benchmarks (UCF101, HMDB51, FCVID, and ActivityNet).
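The abstract contrasts scoring frames one at a time with selecting them jointly so that "good" frames spread over the video. The toy sketch below is not the authors' SMART model; the scores, the greedy procedure, and the `spread_weight` bonus are all illustrative assumptions. It only shows how a joint criterion can distribute selections where independent top-k selection clusters them:

```python
# Toy illustration only, NOT the SMART method: contrasts independent
# top-k frame selection with a greedy joint selection that also rewards
# temporal spread between the chosen frames.

def topk_selection(scores, k):
    """Pick the k highest-scoring frames, each considered independently."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

def joint_selection(scores, k, spread_weight=1.0):
    """Greedily pick k frames, trading frame score against distance to
    frames already selected, so picks spread over the video."""
    n = len(scores)
    selected = []
    for _ in range(k):
        best_i, best_val = None, float("-inf")
        for i in range(n):
            if i in selected:
                continue
            # Normalized distance to the nearest already-selected frame.
            spread = min((abs(i - j) for j in selected), default=n) / n
            val = scores[i] + spread_weight * spread
            if val > best_val:
                best_i, best_val = i, val
        selected.append(best_i)
    return sorted(selected)

# High per-frame scores cluster at the start of this 10-frame video.
scores = [0.9, 0.8, 0.7, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2]
print(topk_selection(scores, 3))   # [0, 1, 2]  -> all from the start
print(joint_selection(scores, 3))  # [0, 1, 9]  -> spread across the video
```

The point of the sketch is the paper's motivating intuition: judging each frame in isolation can waste the budget on near-duplicate frames, while any criterion that considers the selected set jointly tends to produce snapshots distributed over the video.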
Pages: 1451-1459 (9 pages)
Related papers (50 total):
  • [31] EMO-MoviNet: Enhancing Action Recognition in Videos with EvoNorm, Mish Activation, and Optimal Frame Selection for Efficient Mobile Deployment
    Hussain, Tarique
    Memon, Zulfiqar Ali
    Qureshi, Rizwan
    Alam, Tanvir
    SENSORS, 2023, 23 (19)
  • [32] AN UNSUPERVISED FRAME SELECTION TECHNIQUE FOR ROBUST EMOTION RECOGNITION IN NOISY SPEECH
    Pandharipande, Meghna
    Chakraborty, Rupayan
    Panda, Ashish
    Kopparapu, Sunil Kumar
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 2055 - 2059
  • [33] Multimodal emotion recognition based on peak frame selection from video
    Zhalehpour, Sara
    Akhtar, Zahid
    Erdem, Cigdem Eroglu
    SIGNAL IMAGE AND VIDEO PROCESSING, 2016, 10 (05) : 827 - 834
  • [34] A key frame selection-based facial expression recognition system
    Guo, S. M.
    Pan, Y. A.
    Liao, Y. C.
    Hsu, C. Y.
    Tsai, J. S. H.
    Chang, C. I.
    ICICIC 2006: FIRST INTERNATIONAL CONFERENCE ON INNOVATIVE COMPUTING, INFORMATION AND CONTROL, VOL 3, PROCEEDINGS, 2006: 341+
  • [35] TEMPORALLY CONSISTENT KEY FRAME SELECTION FROM VIDEO FOR FACE RECOGNITION
    Saeed, Usman
    Dugelay, Jean-Luc
    18TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO-2010), 2010, : 1311 - 1315
  • [36] Efficient video face recognition based on frame selection and quality assessment
    Kharchevnikova, Angelina
    Savchenko, Andrey V.
    PEERJ COMPUTER SCIENCE, 2021, 7: 1 - 18
  • [39] Learning discriminative features for fast frame-based action recognition
    Wang, Liang
    Wang, Yizhou
    Jiang, Tingting
    Zhao, Debin
    Gao, Wen
    PATTERN RECOGNITION, 2013, 46 (07) : 1832 - 1840
  • [40] A fast human action recognition algorithm based on key frame pinning
    Tan, Hai
    Xu, Yin
    Sun, Gangbo
    Lei, Bo
    REAL-TIME PHOTONIC MEASUREMENTS, DATA MANAGEMENT, AND PROCESSING III, 2019, 10822