Discovering attractive segments in the user-generated video streams

Cited by: 27
Authors
Wang, Zheng [1 ]
Zhou, Jie [1 ]
Ma, Jing [2 ]
Li, Jingjing [1 ]
Ai, Jiangbo [1 ]
Yang, Yang [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Sichuan, Peoples R China
[2] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Keywords
Time-sync comment; User-generated video stream; Segment popularity prediction; Video-to-text transfer;
DOI
10.1016/j.ipm.2019.102130
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
With the rapid development of digital equipment and the continuous upgrading of online media, a growing number of people post videos on the web to share their daily lives (Jelodar, Paulius, & Sun, 2019). Generally, not all segments of a video are popular with audiences; some may be boring. If we can predict which segment of a newly generated video stream will be popular, audiences can enjoy just that segment instead of watching the whole video to find the highlights. Moreover, if we can predict the emotions a video will induce in its audience, this will be helpful both for video analysis and for guiding video-makers in improving their videos. In recent years, crowd-sourced time-sync video comments have emerged worldwide, enabling further research on temporal video labeling. In this paper, we propose a novel framework with the following goal: predicting which segment of a newly generated video stream (i.e., one that has not yet received any time-sync comments) will be popular among audiences. Finally, experimental results on real-world data demonstrate the effectiveness of the proposed framework and justify the idea of predicting the popularity of segments in a video by exploiting crowd-sourced time-sync comments as a bridge for video analysis.
Pages: 14
Related Papers
50 records
  • [21] User-generated video emotion recognition based on key frames
    Jie Wei
    Xinyu Yang
    Yizhuo Dong
    Multimedia Tools and Applications, 2021, 80: 14343-14361
  • [22] User-generated video composition based on device context measurements
    Stohr D.
    Toteva I.
    Wilk S.
    Effelsberg W.
    Steinmetz R.
    International Journal of Semantic Computing, 2017, 11 (01): 65-84
  • [23] User-generated content
    Greenfield, David
    CONTROL ENGINEERING, 2009, 56 (10): 2-2
  • [24] User-generated content
    Wofford, Jennifer
    NEW MEDIA & SOCIETY, 2012, 14 (07): 1236-1239
  • [25] User-Generated Evidence
    Hamilton, Rebecca J.
    COLUMBIA JOURNAL OF TRANSNATIONAL LAW, 2019, 57 (01): 1-61
  • [26] Mapping segments accessing user-generated content and website applications in a joint space
    Kastner, Margit
    Stangl, Brigitte
    INTERNATIONAL JOURNAL OF CULTURE TOURISM AND HOSPITALITY RESEARCH, 2012, 6 (04): 389-404
  • [27] Video Diffusion in User-generated Content Website: An empirical analysis of Bilibili
    Yan, Li
    Cha, Namjun
    Cho, Hosoo
    Hwang, Junseok
    2019 21ST INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): ICT FOR 4TH INDUSTRIAL REVOLUTION, 2019: 81-84
  • [28] Evaluating video game moods and their separability based on user-generated reviews
    Cho, Hyerim
    Lee, Wan-Chen
    Thach, Heather
    Hirt, Juliana
    JOURNAL OF DOCUMENTATION, 2025, 81 (02): 545-565
  • [29] A Novel Sub-Shot Segmentation Method for User-generated Video
    Lei, Zhuo
    Zhang, Qian
    Zheng, Chi
    Qiu, Guoping
    NINTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2017), 2018, 10615
  • [30] Modeling the Evolution of User-generated Content on a Large Video Sharing Platform
    Mehrotra, Rishabh
    Bhattacharya, Prasanta
    WWW'15 COMPANION: PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB, 2015: 365-366