The Role of the Input in Natural Language Video Description

Cited by: 2
Authors
Cascianelli, Silvia [1 ]
Costante, Gabriele [1 ]
Devo, Alessandro [1 ]
Ciarfuglia, Thomas A. [1 ]
Valigi, Paolo [1 ]
Fravolini, Mario L. [1 ]
Affiliations
[1] Univ Perugia, Dept Engn, I-06123 Perugia, Italy
Keywords
Video description; multimodal data; input preprocessing; IMAGE; ATTENTION; TEXT;
DOI
10.1109/TMM.2019.2924598
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Natural language video description (NLVD) has recently received strong interest in the computer vision, natural language processing (NLP), multimedia, and autonomous robotics communities. State-of-the-art (SotA) approaches obtain remarkable results when tested on the benchmark datasets; however, they generalize poorly to new datasets. In addition, none of the existing works focus on the processing of the input to NLVD systems, which is both visual and textual. In this paper, an extensive study of the role of the visual input is presented, evaluated with respect to the overall NLP performance. This is achieved by performing data augmentation of the visual component, applying common transformations that model the camera distortions, noise, lighting, and camera positioning typical of real-world operative scenarios. A t-SNE-based analysis is proposed to evaluate the effects of the considered transformations on the overall visual data distribution. For this study, the English subset of the Microsoft Research Video Description (MSVD) dataset is considered, which is commonly used for NLVD. It was observed that this dataset contains a considerable number of syntactic and semantic errors. These errors were manually amended, and the new version of the dataset (called MSVD-v2) is used in the experimentation. The MSVD-v2 dataset is released to help gain insight into the NLVD problem.
Pages: 271-283
Number of pages: 13
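To make the methodology described in the abstract more concrete, below is a minimal Python sketch, not the authors' actual pipeline: it applies simple frame-level transformations standing in for sensor noise, lighting changes, and camera positioning, then projects crude per-frame statistics with scikit-learn's t-SNE to compare the distributions of original and transformed frames. The transformation functions, the hand-crafted feature, and the synthetic frames are illustrative assumptions; the paper works on MSVD videos and richer visual features.

```python
# Illustrative sketch (assumptions, not the paper's pipeline): augment frames with
# noise / lighting / camera-shift transformations and inspect the effect on the
# visual data distribution with a t-SNE projection.
import numpy as np
from sklearn.manifold import TSNE


def add_gaussian_noise(frame: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Simulate sensor noise with zero-mean Gaussian noise (pixel range 0-255)."""
    noisy = frame.astype(np.float32) + np.random.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def change_lighting(frame: np.ndarray, gain: float = 1.2, bias: float = 10.0) -> np.ndarray:
    """Simulate a lighting change with a linear brightness/contrast adjustment."""
    adjusted = frame.astype(np.float32) * gain + bias
    return np.clip(adjusted, 0, 255).astype(np.uint8)


def shift_camera(frame: np.ndarray, dx: int = 8, dy: int = 4) -> np.ndarray:
    """Crudely mimic a change in camera positioning by translating the frame."""
    h, w = frame.shape[:2]
    shifted = np.zeros_like(frame)
    shifted[dy:, dx:] = frame[: h - dy, : w - dx]
    return shifted


def frame_feature(frame: np.ndarray) -> np.ndarray:
    """Very crude visual descriptor (per-channel mean and std), a stand-in for CNN features."""
    f = frame.astype(np.float32)
    return np.concatenate([f.mean(axis=(0, 1)), f.std(axis=(0, 1))])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic frames used only to keep the sketch self-contained.
    frames = [rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8) for _ in range(50)]
    variants = {
        "original": frames,
        "noise": [add_gaussian_noise(f) for f in frames],
        "lighting": [change_lighting(f) for f in frames],
        "shift": [shift_camera(f) for f in frames],
    }
    feats, labels = [], []
    for name, frs in variants.items():
        feats.extend(frame_feature(f) for f in frs)
        labels.extend([name] * len(frs))
    # t-SNE projection of all variants into 2-D; perplexity must stay below the sample count.
    emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(np.array(feats))
    for name in variants:
        pts = emb[[i for i, lab in enumerate(labels) if lab == name]]
        print(f"{name:>9s}: 2-D centroid = {pts.mean(axis=0).round(2)}")
```

Comparing where each transformed group lands in the 2-D embedding (here summarized by its centroid) gives a rough, qualitative view of how strongly a given transformation shifts the visual data distribution, which is the kind of question the paper's t-SNE-based analysis addresses on real MSVD features.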