Sharing video annotations

Cited by: 0
Authors
Caspi, Y. [1]
Bargeron, D. [1]
Institution
[1] Hebrew Univ Jerusalem, Jerusalem, Israel
Keywords
DOI: not available
CLC number: TP31 [Computer software]
Discipline codes: 081202; 0835
Abstract
This paper describes an approach for locating annotations generated in one video and properly placing them in a second, modified version of the same video. We focus on modifications that standard television (TV) broadcasts may experience, including content insertion and deletion (i.e., commercials), format conversions, and different start/end recording times. To overcome these modifications, we propose content-based video timelines. We identify the position of a frame in a long video stream with an accuracy of one frame based on its content, without using embedded time codes. To make this approach feasible, we use a compact representation of a video frame which we call a "fingerprint." Fingerprints capture small temporal variations within shots, and therefore allow precise position recovery. Our fingerprints' efficient storage size, extraction time, and comparison complexity suggest that our approach can be applied using off-the-shelf PCs and TV "set-top" boxes.
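The core idea in the abstract can be illustrated with a toy sketch. This is not the authors' actual fingerprint design (theirs exploits small temporal variations within shots); here each frame is simply reduced to a grid of block-mean intensities, and a frame's position in a modified stream is recovered by nearest-fingerprint search. All names, grid sizes, and the synthetic "broadcast with inserted commercials" are illustrative assumptions.

```python
import numpy as np

def fingerprint(frame, grid=4):
    """Compact signature of a grayscale frame: grid x grid block means."""
    h, w = frame.shape
    frame = frame[: h - h % grid, : w - w % grid]   # crop to a multiple of the grid
    blocks = frame.reshape(grid, frame.shape[0] // grid,
                           grid, frame.shape[1] // grid)
    return blocks.mean(axis=(1, 3)).ravel()         # grid*grid numbers per frame

def locate(query_fp, stream_fps):
    """Index of the stream frame whose fingerprint is nearest to the query."""
    dists = np.linalg.norm(stream_fps - query_fp, axis=1)
    return int(np.argmin(dists))

# Synthetic demo: a 50-frame "original" recording, and a "broadcast" copy
# with 10 commercial frames inserted at the start (content insertion).
rng = np.random.default_rng(0)
original = rng.random((50, 32, 32))
commercials = rng.random((10, 32, 32))
broadcast = np.concatenate([commercials, original])

stream_fps = np.stack([fingerprint(f) for f in broadcast])
annotated_index = 20                                # frame annotated in the original
found = locate(fingerprint(original[annotated_index]), stream_fps)
print(found)                                        # 30: shifted by the 10 inserted frames
```

An annotation anchored at frame 20 of the original is correctly re-anchored at frame 30 of the broadcast copy, since exact-content frames yield zero fingerprint distance; a real system would additionally have to tolerate format conversions, which this toy nearest-neighbor search only handles to the extent that block means survive them.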
Pages: 2227-2230 (4 pages)
Related papers (50 total)
  • [1] Modeling the dance video annotations
    Ramadoss, Balakrishnan
    Rajkumar, Kannan
    [J]. 2006 1ST INTERNATIONAL CONFERENCE ON DIGITAL INFORMATION MANAGEMENT, 2006, : 145 - +
  • [2] In-Context Annotations for Refinding and Sharing
    Kawase, Ricardo
    Herder, Eelco
    Papadakis, George
    Nejdl, Wolfgang
    [J]. WEB INFORMATION SYSTEMS AND TECHNOLOGIES, 2011, 75 : 85 - 100
  • [3] LabelMe video: Building a Video Database with Human Annotations
    Yuen, Jenny
    Russell, Bryan
    Liu, Ce
    Torralba, Antonio
    [J]. 2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2009, : 1451 - 1458
  • [4] Temporally stable video segmentation without video annotations
    Azulay, Aharon
    Halperin, Tavi
    Vantzos, Orestis
    Bornstein, Nadav
    Bibi, Ofir
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1919 - 1928
  • [5] Analysis of video guides with multimedia annotations
    Ruiz Rey, Francisco J.
    Cebrian Robles, Violeta
    Cebrian de la Serna, Manuel
    [J]. CAMPUS VIRTUALES, 2021, 10 (02): : 97 - 109
  • [6] Markup upon Video - towards Dynamic and Interactive Video Annotations
    Schultes, Peter
    Lehner, Franz
    Kosch, Harald
    [J]. JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2011, 17 (04) : 605 - 617
  • [7] Sharing annotations better: RESTful Open Annotation
    Pyysalo, Sampo
    Campos, Jorge
    Cejuela, Juan Miguel
    Ginter, Filip
    Hakala, Kai
    Li, Chen
    Stenetorp, Pontus
    Jensen, Lars Juhl
    [J]. PROCEEDINGS OF THE 53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2015): SYSTEM DEMONSTRATIONS, 2015, : 91 - 96
  • [8] Learning Physics Through Online Video Annotations
    Marcal, J.
    Borges, M. M.
    Viana, P.
    Carvalho, P.
    [J]. EDUCATION IN THE KNOWLEDGE SOCIETY, 2020, 21
  • [9] Literature review on video annotations in teacher education
    Cebrian-Robles, Violeta
    Perez-Torregrosa, Ana-Belen
    de la Serna, Manuel Cebrian
    [J]. PIXEL-BIT- REVISTA DE MEDIOS Y EDUCACION, 2023, (66): : 31 - 57
  • [10] Learning to Track Instances without Video Annotations
    Fu, Yang
    Liu, Sifei
    Iqbal, Umar
    De Mello, Shalini
    Shi, Humphrey
    Kautz, Jan
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 8676 - 8685