The Role of the Input in Natural Language Video Description

Cited by: 2

Authors
Cascianelli, Silvia [1 ]
Costante, Gabriele [1 ]
Devo, Alessandro [1 ]
Ciarfuglia, Thomas A. [1 ]
Valigi, Paolo [1 ]
Fravolini, Mario L. [1 ]
Affiliations
[1] Univ Perugia, Dept Engn, I-06123 Perugia, Italy
Keywords
Video description; multimodal data; input preprocessing; IMAGE; ATTENTION; TEXT;
DOI
10.1109/TMM.2019.2924598
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
摘要
Natural language video description (NLVD) has recently received strong interest in the computer vision, natural language processing (NLP), multimedia, and autonomous robotics communities. State-of-the-art (SotA) approaches obtain remarkable results when tested on the benchmark datasets, but they generalize poorly to new datasets. In addition, none of the existing works focuses on the processing of the input to NLVD systems, which is both visual and textual. In this paper, an extensive study is presented on the role of the visual input, evaluated with respect to overall NLP performance. This is achieved by performing data augmentation of the visual component, applying common transformations that model camera distortions, noise, lighting, and camera positioning, as typical of real-world operative scenarios. A t-SNE-based analysis is proposed to evaluate the effects of the considered transformations on the overall visual data distribution. For this study, the English subset of the Microsoft Research Video Description (MSVD) dataset, commonly used for NLVD, is considered. It was observed that this dataset contains a substantial number of syntactic and semantic errors. These errors were amended manually, and the resulting version of the dataset (called MSVD-v2) is used in the experimentation. The MSVD-v2 dataset is released to help gain insight into the NLVD problem.
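The abstract mentions augmenting the visual input with transformations that model noise, lighting, and camera positioning, but this record does not specify the pipeline. As a rough illustration only, the following NumPy sketch applies perturbations of that general kind to a single frame; the function name, parameter values, and the use of a border crop as a stand-in for camera repositioning are all assumptions, not the paper's method.

```python
import numpy as np

def augment_frame(frame, rng, noise_std=5.0, brightness=1.1, crop_margin=4):
    """Illustrative frame-level perturbations: lighting change, sensor
    noise, and a border crop as a crude proxy for camera repositioning.
    All parameter values are arbitrary assumptions for demonstration."""
    # Lighting change: scale pixel intensities by a constant factor.
    out = frame.astype(np.float32) * brightness
    # Sensor noise: additive zero-mean Gaussian noise.
    out += rng.normal(0.0, noise_std, size=out.shape)
    # Camera repositioning (rough proxy): crop a margin from each border.
    m = crop_margin
    out = out[m:-m, m:-m]
    # Return a valid 8-bit image.
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # dummy frame
aug = augment_frame(frame, rng)
print(aug.shape)  # (56, 56, 3)
```

In practice such perturbed frames would be fed to the same visual encoder as the originals, so the effect on downstream description quality (and on the visual feature distribution, e.g. via t-SNE) can be measured.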
Pages: 271-283 (13 pages)