Predicting Important Objects for Egocentric Video Summarization

Cited by: 0
Authors
Yong Jae Lee
Kristen Grauman
Affiliations
[1] University of California,Department of Computer Science
[2] University of Texas at Austin,Department of Computer Science
Source
International Journal of Computer Vision, 2015, 114(1)
Keywords
Egocentric vision; Video summarization; Category discovery; Saliency detection
DOI
Not available
Abstract
We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer’s day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video—such as the nearness to hands, gaze, and frequency of occurrence—and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method’s promise relative to existing techniques for saliency and summarization.
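The abstract describes selecting storyboard frames under a length budget with a dynamic program that trades off importance, visual uniqueness, and temporal displacement. The sketch below is a minimal illustration of that kind of budget-constrained selection, not the authors' exact formulation: the importance scores, the pairwise `dissimilarity` matrix, and the `gap_cost` penalty are all hypothetical stand-ins for the learned cues in the paper.

```python
def select_storyboard(importance, dissimilarity, gap_cost, k):
    """Pick exactly k frame indices (in temporal order) maximizing the sum of
    per-frame importance plus visual dissimilarity between consecutive picks,
    minus a temporal-gap penalty. All inputs are illustrative placeholders:
      importance[i]        - predicted importance of frame i
      dissimilarity[p][i]  - visual uniqueness of frame i w.r.t. frame p
      gap_cost(p, i)       - penalty on the temporal displacement i - p
    Runs in O(n^2 * k) time for n frames.
    """
    n = len(importance)
    NEG = float("-inf")
    # dp[j][i]: best score using j frames with the last one at index i
    dp = [[NEG] * n for _ in range(k + 1)]
    back = [[-1] * n for _ in range(k + 1)]  # backpointers for recovery
    for i in range(n):
        dp[1][i] = importance[i]
    for j in range(2, k + 1):
        for i in range(j - 1, n):           # need j-1 frames before i
            for p in range(j - 2, i):
                if dp[j - 1][p] == NEG:
                    continue
                s = (dp[j - 1][p] + importance[i]
                     + dissimilarity[p][i] - gap_cost(p, i))
                if s > dp[j][i]:
                    dp[j][i] = s
                    back[j][i] = p
    # Trace back from the best final frame to recover the selection.
    i = max(range(n), key=lambda t: dp[k][t])
    sel, j = [], k
    while i != -1:
        sel.append(i)
        i = back[j][i]
        j -= 1
    return sel[::-1]
```

With a zero dissimilarity matrix and zero gap cost, the program reduces to picking the k highest-importance frames in temporal order; the two extra terms then push the selection toward visually distinct, well-spread keyframes.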
Pages: 38–55 (17 pages)
Related Papers (50 in total)
  • [1] Predicting Important Objects for Egocentric Video Summarization
    Lee, Yong Jae
    Grauman, Kristen
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 114 (01) : 38 - 55
  • [2] Discovering Important People and Objects for Egocentric Video Summarization
    Lee, Yong Jae
    Ghosh, Joydeep
    Grauman, Kristen
    2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012, : 1346 - 1353
  • [3] Object Triggered Egocentric Video Summarization
    Jain, Samriddhi
    Rameshan, Renu M.
    Nigam, Aditya
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS: 17TH INTERNATIONAL CONFERENCE, CAIP 2017, PT II, 2017, 10425 : 428 - 439
  • [4] Personalized Egocentric Video Summarization for Cultural Experience
    Varini, Patrizia
    Serra, Giuseppe
    Cucchiara, Rita
    ICMR'15: PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2015, : 539 - 542
  • [5] Story-Driven Summarization for Egocentric Video
    Lu, Zheng
    Grauman, Kristen
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 2714 - 2721
  • [6] Spatial and temporal scoring for egocentric video summarization
    Guo, Zhao
    Gao, Lianli
    Zhen, Xiantong
    Zou, Fuhao
    Shen, Fumin
    Zheng, Kai
    NEUROCOMPUTING, 2016, 208 : 299 - 308
  • [7] Shot Level Egocentric Video Co-summarization
    Sahu, Abhimanyu
    Chowdhury, Ananda S.
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2887 - 2892
  • [8] WEARABLE SOCIAL CAMERA: EGOCENTRIC VIDEO SUMMARIZATION FOR SOCIAL INTERACTION
    Yang, Jen-An
    Lee, Chia-Han
    Yang, Shao-Wen
    Somayazulu, V. Srinivasa
    Chen, Yen-Kuang
    Chien, Shao-Yi
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2016,
  • [9] Egocentric Video Summarization of Cultural Tour based on User Preferences
    Varini, Patrizia
    Serra, Giuseppe
    Cucchiara, Rita
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE, 2015, : 931 - 934
  • [10] A hybrid egocentric video summarization method to improve the healthcare for Alzheimer patients
    Sultan, Saba
    Javed, Ali
    Irtaza, Aun
    Dawood, Hassan
    Dawood, Hussain
    Bashir, Ali Kashif
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2019, 10 (10) : 4197 - 4206