A Dynamic Frame Selection Framework for Fast Video Recognition

Cited by: 24
Authors
Wu, Zuxuan [1 ]
Li, Hengduo [2 ]
Xiong, Caiming [3 ]
Jiang, Yu-Gang [1 ]
Davis, Larry Steven [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[3] Salesforce Res, Palo Alto, CA 94301 USA
Keywords
Computational modeling; Three-dimensional displays; Video sequences; Two dimensional displays; Computational efficiency; Standards; Electronic mail; Video classification; conditional computation; deep neural networks; reinforcement learning
DOI
10.1109/TPAMI.2020.3029425
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
We introduce AdaFrame, a conditional computation framework that adaptively selects relevant frames on a per-input basis for fast video recognition. AdaFrame, which contains a Long Short-Term Memory network augmented with a global memory that provides context information, operates as an agent that interacts with video sequences, searching over time for which frames to use. Trained with policy search methods, AdaFrame at each time step computes a prediction, decides where to observe next, and estimates a utility, i.e., the expected future reward, of viewing more frames. By exploiting the predicted utilities at test time, AdaFrame performs adaptive-lookahead inference that minimizes overall computational cost without degrading accuracy. We conduct extensive experiments on two large-scale video benchmarks, FCVID and ActivityNet. With a vanilla ResNet-101 model, AdaFrame matches the accuracy of using all frames while requiring, on average, only 8.21 and 8.65 frames on FCVID and ActivityNet, respectively. We also demonstrate that AdaFrame is compatible with modern 2D and 3D networks for video recognition. Furthermore, we show, among other things, that learned frame usage reflects the difficulty of making predictions, both at the instance level within a class and at the class level across categories.
Pages: 1699-1711 (13 pages)
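The adaptive selection loop described in the abstract can be sketched as follows. This is a minimal, illustrative mock, not the authors' implementation: the recurrent update (an LSTM with a global memory in the paper) is reduced to a plain tanh RNN, the parameters are random stand-ins rather than learned weights, and the margin-based stopping rule and deterministic jump policy are simplifying assumptions standing in for the learned utility and policy heads.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES, FEAT_DIM, HID_DIM, MAX_STEPS = 10, 64, 32, 8

# Stand-ins for learned parameters (random here, for illustration only).
W_in = rng.normal(scale=0.1, size=(FEAT_DIM, HID_DIM))
W_h = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))
W_cls = rng.normal(scale=0.1, size=(HID_DIM, NUM_CLASSES))   # prediction head
W_util = rng.normal(scale=0.1, size=(HID_DIM, 1))            # utility head
W_loc = rng.normal(scale=0.1, size=(HID_DIM, 1))             # "where next" head

def rnn_step(h, x):
    """One recurrent update (an LSTM in the paper; a tanh RNN here)."""
    return np.tanh(x @ W_in + h @ W_h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_inference(frames, margin=0.05):
    """View frames one at a time; halt early once the estimated utility
    (expected future reward) of viewing more frames falls below `margin`."""
    h = np.zeros(HID_DIM)
    t = 0                                   # start at the first frame
    used = []
    probs = None
    for _ in range(MAX_STEPS):
        h = rnn_step(h, frames[t])
        used.append(t)
        probs = softmax(h @ W_cls)          # prediction at this step
        utility = (h @ W_util).item()       # estimated future reward
        if utility < margin:                # adaptive lookahead: stop here
            break
        # Decide where to observe next (a learned policy in the paper;
        # here a deterministic forward jump derived from the hidden state).
        step = 1 + int(abs((h @ W_loc).item()) * 3)
        t = min(t + step, len(frames) - 1)
    return int(np.argmax(probs)), used

video = rng.normal(size=(30, FEAT_DIM))     # 30 frames of mock features
pred, used = adaptive_inference(video)
print(pred, used)                           # class id and frames actually viewed
```

The key point the sketch illustrates is that the number of frames processed is input-dependent: easy videos trip the utility-based stopping rule after a few observations, while harder ones consume the full budget.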