Automatic Editing of Footage from Multiple Social Cameras

Cited by: 83
Authors
Arev, Ido [1 ,3 ]
Park, Hyun Soo [2 ]
Sheikh, Yaser [2 ,3 ]
Hodgins, Jessica [2 ,3 ]
Shamir, Ariel [1 ,3 ]
Affiliations
[1] Interdisciplinary Ctr Herzliya, Herzliyya, Israel
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Disney Res Pittsburgh, Pittsburgh, PA USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2014, Vol. 33, No. 4
Keywords
Multiple Cameras; Video Editing;
DOI
10.1145/2601097.2601198
CLC Classification
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
We present an approach that takes multiple videos captured by social cameras (cameras that are carried or worn by members of the group involved in an activity) and produces a coherent "cut" video of the activity. Footage from social cameras contains an intimate, personalized view that reflects the part of an event that was of importance to the camera operator (or wearer). We leverage the insight that social cameras share the focus of attention of the people carrying them. We use this insight to determine where the important "content" in a scene is taking place, and use it in conjunction with cinematographic guidelines to select which cameras to cut to and to determine the timing of those cuts. A trellis graph representation is used to optimize an objective function that maximizes coverage of the important content in the scene, while respecting cinematographic guidelines such as the 180-degree rule and avoiding jump cuts. We demonstrate cuts of the videos in various styles and lengths for a number of scenarios, including sports games, street performances, family activities, and social get-togethers. We evaluate our results through an in-depth analysis of the cuts in the resulting videos and through comparison with videos produced by a professional editor and existing commercial solutions.
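The trellis-graph optimization described in the abstract (choose one camera per time step, maximize coverage of important content, penalize cinematographically undesirable transitions) has the shape of a Viterbi-style dynamic program. The following is a minimal illustrative sketch under that assumption; the function `best_cut`, the scalar `jump_cut_penalty`, and the per-step `coverage` scores are hypothetical stand-ins, not the paper's actual objective or cost terms.

```python
def best_cut(coverage, jump_cut_penalty=1.0):
    """Select one camera per time step over a trellis.

    coverage[t][c] = quality score of camera c at time step t
    (a stand-in for how well that camera covers the joint focus
    of attention). Switching cameras between consecutive steps
    incurs jump_cut_penalty, a stand-in for cinematographic costs.
    Returns the camera index sequence with the best total score.
    """
    T, C = len(coverage), len(coverage[0])
    # best[t][c] = best cumulative score of any path ending at camera c
    # at time t; back[t][c] = the predecessor camera on that path
    best = [[0.0] * C for _ in range(T)]
    back = [[0] * C for _ in range(T)]
    best[0] = list(coverage[0])
    for t in range(1, T):
        for c in range(C):
            # score of arriving at c from each previous camera p,
            # paying the transition penalty only when we cut (p != c)
            cand = [best[t - 1][p] - (jump_cut_penalty if p != c else 0.0)
                    for p in range(C)]
            p_best = max(range(C), key=lambda p: cand[p])
            back[t][c] = p_best
            best[t][c] = cand[p_best] + coverage[t][c]
    # backtrack from the best final camera to recover the full cut
    c = max(range(C), key=lambda cc: best[T - 1][cc])
    path = [c]
    for t in range(T - 1, 0, -1):
        c = back[t][c]
        path.append(c)
    return path[::-1]
```

With two cameras where camera 0 scores well early and camera 1 scores well late, the sketch holds each shot as long as it is worthwhile and cuts once, since every extra switch costs the penalty. The paper's actual formulation additionally encodes constraints such as the 180-degree rule in the transition costs.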
Pages: 11
Related Papers
50 in total
  • [31] Automatic Estimation of Sphere Centers from Images of Calibrated Cameras
    Hajder, Levente
    Toth, Tekla
    Pusztai, Zoltan
    VISAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 4: VISAPP, 2020, : 490 - 497
  • [32] Out of the shadows: automatic fish detection from acoustic cameras
    Connolly, R. M.
Jinks, K. I.
    Shand, A.
    Taylor, M. D.
    Gaston, T. F.
    Becker, A.
    Jinks, E. L.
    AQUATIC ECOLOGY, 2023, 57 (04) : 833 - 844
  • [33] Automatic collection of fuel prices from a network of mobile cameras
    Dong, Y. F.
    Kanhere, S.
    Chou, C. T.
    Bulusu, N.
    DISTRIBUTED COMPUTING IN SENSOR SYSTEMS, 2008, 5067 : 140 - +
  • [34] Fast Automatic Detection of Wildlife in Images from Trap Cameras
    Figueroa, Karina
    Camarena-Ibarrola, Antonio
    Garcia, Jonathan
    Tejeda Villela, Hector
    PROGRESS IN PATTERN RECOGNITION IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2014, 2014, 8827 : 940 - 947
  • [35] AUTOMATIC THEATRE CAMERAS - A REVIEW
    FLETCHER, RT
    MEDICAL AND BIOLOGICAL ILLUSTRATION, 1969, S 19 : S36 - &
  • [37] AUTOMATIC EDITING OF THESAURUS
    AKHMEDZHANOV, MS
    GELFMAN, GS
    KOROLEV, EI
    NAUCHNO-TEKHNICHESKAYA INFORMATSIYA SERIYA 2-INFORMATSIONNYE PROTSESSY I SISTEMY, 1989, (02): : 35 - 41
  • [38] Intra-Diegetic Cameras as Cinematic Actor Assemblages in Found Footage Horror Cinema
Rødje, Kjetil
    FILM-PHILOSOPHY, 2017, 21 (02): : 206 - 222
  • [39] Automatic Detection of Construction Work using Surveillance Video Footage
    Soroka, M. T.
    Mita, A.
    Kume, T.
    PROCEEDINGS OF THE FOURTH EUROPEAN WORKSHOP ON STRUCTURAL HEALTH MONITORING 2008, 2008, : 1287 - 1287
  • [40] Automatic Fish Recognition and Counting in Video Footage of Fishery Operations
    Luo, Suhuai
    Li, Xuechen
    Wang, Dadong
    Li, Jiaming
    Sun, Changming
    2015 INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMMUNICATION NETWORKS (CICN), 2015, : 296 - 299