A core region captioning framework for automatic video understanding in story video contents

Cited by: 2
Authors
Suh, Hyesun [1 ]
Kim, Jiyeon [2 ]
So, Jinsoo [3 ]
Jung, Jongjin [1 ]
Affiliations
[1] Daejin Univ, Dept Comp Sci Engn, Pocheon Si, South Korea
[2] Daejin Univ, Dept Creat Future Talents, 1007 Hoguk Ro, Pocheon Si 11159, South Korea
[3] Informat & Commun Div, BigData Team, Goyang Si, South Korea
Keywords
Core region detection algorithm; image captioning models; video scene analysis; proposed algorithm; DenseCap model
DOI
10.1177/18479790221078130
Chinese Library Classification
F [Economics]
Subject Classification Code
02
Abstract
Due to the rapid increase in image and video data, research on the visual analysis of such unstructured data has recently been actively conducted. One representative image captioning model, DenseCap, extracts various regions from an image and generates region-level captions. However, because the existing DenseCap model does not prioritize region captions, it is difficult to identify the relatively significant region captions that best describe the image. There has also been a lack of research on captioning that focuses on the core regions of story content, such as images from movies and dramas. In this study, we propose a new image captioning framework based on DenseCap that aims to promote the understanding of movies in particular. In addition, we design and implement a character identification module so that character information can be used for caption detection and caption improvement in core regions. We also propose a core region caption detection algorithm that considers the variables affecting region caption importance. Finally, a performance evaluation is conducted to determine the accuracy of the character identification module, and the effectiveness of the proposed algorithm is demonstrated through a visual comparison with the existing DenseCap model.
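As a rough illustration of the kind of priority scoring the abstract describes, the sketch below ranks region captions by an importance score built from region size, position, caption confidence, and the presence of an identified character. The RegionCaption fields, the weights, and the importance function are assumptions made for illustration only; they are not the authors' published algorithm or the DenseCap API.

```python
from dataclasses import dataclass

@dataclass
class RegionCaption:
    caption: str         # caption text generated for the region
    x: float             # top-left x of the region box
    y: float             # top-left y of the region box
    w: float             # region width
    h: float             # region height
    confidence: float    # caption confidence reported by the captioning model
    has_character: bool  # whether an identified character appears in the region

def importance(r: RegionCaption, img_w: float, img_h: float) -> float:
    """Score one region caption: larger, more central regions with confident
    captions and identified characters rank higher (weights are assumptions)."""
    area_ratio = (r.w * r.h) / (img_w * img_h)
    # Normalised closeness of the region centre to the image centre, in [0, 1].
    cx, cy = r.x + r.w / 2, r.y + r.h / 2
    centrality = 1.0 - (abs(cx - img_w / 2) / (img_w / 2) +
                        abs(cy - img_h / 2) / (img_h / 2)) / 2
    character_bonus = 1.0 if r.has_character else 0.0
    return (0.4 * area_ratio + 0.3 * centrality +
            0.2 * r.confidence + 0.1 * character_bonus)

def core_captions(regions, img_w, img_h, k=3):
    """Return the k highest-scoring region captions as the core set."""
    return sorted(regions, key=lambda r: importance(r, img_w, img_h), reverse=True)[:k]

# Example: pick the top core captions for a 1280x720 movie frame.
frame_regions = [
    RegionCaption("a man in a dark coat", 480, 160, 320, 420, 0.92, True),
    RegionCaption("a lamp on a wooden table", 40, 60, 90, 130, 0.85, False),
    RegionCaption("a window with curtains", 1050, 30, 180, 300, 0.78, False),
]
for region in core_captions(frame_regions, 1280, 720, k=2):
    print(region.caption)
```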
Pages: 11