Learning Text-to-Video Retrieval from Image Captioning

Cited by: 0
Authors
Lucas Ventura [1]
Cordelia Schmid [2]
Gül Varol [2]
Affiliations
[1] Univ Gustave Eiffel, LIGM, École des Ponts, CNRS
[2] PSL Research University, Inria, ENS, CNRS
Keywords
Text-to-video retrieval; Image captioning; Multimodal learning
DOI
10.1007/s11263-024-02202-8
Abstract
We describe a protocol to study text-to-video retrieval training with unlabeled videos, where we assume (i) no access to labels for any videos, i.e., no access to the set of ground-truth captions, but (ii) access to images labeled with text. Using image expert models is a realistic scenario, given that annotating images is cheaper and therefore more scalable than expensive video labeling schemes. Recently, zero-shot image experts such as CLIP have established a new strong baseline for video understanding tasks. In this paper, we make use of this progress and instantiate the image experts with two types of models: a text-to-image retrieval model to provide an initial backbone, and image captioning models to provide a supervision signal for unlabeled videos. We show that automatically labeling video frames with image captioning enables text-to-video retrieval training. This process adapts the features to the target domain at no manual annotation cost, consequently outperforming the strong zero-shot CLIP baseline. During training, we sample captions from multiple video frames that best match the visual content, and perform temporal pooling over frame representations by scoring frames according to their relevance to each caption. We conduct extensive ablations to provide insights, and demonstrate the effectiveness of this simple framework by outperforming the CLIP zero-shot baselines on text-to-video retrieval on three standard datasets, namely ActivityNet, MSR-VTT, and MSVD. Code and models will be made publicly available.
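The training signal described in the abstract can be pictured compactly: frame embeddings from a CLIP-style backbone are pooled with weights proportional to each pseudo-caption's relevance to each frame. The snippet below is a minimal sketch under that reading of the abstract, not the authors' released code; the function name, tensor shapes, and temperature value are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, NOT the authors' code) of
# caption-scored temporal pooling: frames most relevant to a sampled
# pseudo-caption dominate the pooled video representation.
import torch
import torch.nn.functional as F

def caption_scored_pooling(frame_feats: torch.Tensor,
                           caption_feat: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """Pool T frame embeddings into one video embedding for one caption.

    frame_feats:  (T, D) L2-normalized frame embeddings (e.g., CLIP image tower).
    caption_feat: (D,)   L2-normalized embedding of a sampled pseudo-caption.
    """
    scores = frame_feats @ caption_feat                # (T,) cosine similarities
    weights = F.softmax(scores / temperature, dim=0)   # relevance-based weights
    video_feat = (weights.unsqueeze(1) * frame_feats).sum(dim=0)  # (D,)
    return F.normalize(video_feat, dim=0)

# Toy usage: 8 frames, 512-dim features, one pseudo-caption for the video.
torch.manual_seed(0)
frames = F.normalize(torch.randn(8, 512), dim=1)
caption = F.normalize(torch.randn(512), dim=0)
print(caption_scored_pooling(frames, caption).shape)   # torch.Size([512])
```

In a full training loop, such pooled video embeddings and caption embeddings would presumably feed a standard contrastive retrieval loss (e.g., InfoNCE-style); the 0.07 temperature above simply echoes the common CLIP default and is likewise an assumption.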
Pages: 1834-1854
Page count: 20
Related papers
50 items in total
  • [41] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation
    Sarto, Sara
    Barraco, Manuele
    Cornia, Marcella
    Baraldi, Lorenzo
    Cucchiara, Rita
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 6914-6924
  • [42] Content based image and video retrieval using embedded text
    Misra, C
    Sural, S
Computer Vision - ACCV 2006, Pt. II, 2006, 3852: 111-120
  • [43] Towards Unified Deep Learning Model for NSFW Image and Video Captioning
    Ko, Jong-Won
    Hwang, Dong-Hyun
Advanced Multimedia and Ubiquitous Engineering, MUE/FutureTech 2018, 2019, 518: 57-63
  • [44] A review of text and image retrieval approaches for broadcast news video
    Yan, Rong
    Hauptmann, Alexander G.
    Information Retrieval, 2007, 10(4-5): 445-484
  • [46] Video captioning with global and local text attention
    Peng, Yuqing
    Wang, Chenxi
    Pei, Yixin
    Li, Yingjun
    The Visual Computer, 2022, 38(12): 4267-4278
  • [48] Video Paragraph Captioning as a Text Summarization Task
    Liu, Hui
    Wan, Xiaojun
ACL-IJCNLP 2021: The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol 2, 2021: 55-60
  • [49] Improving distinctiveness in video captioning with text-video similarity
    Velda, Vania
    Immanuel, Steve Andreas
    Hendria, Willy Fitra
    Jeong, Cheol
Image and Vision Computing, 2023, 136
  • [50] Efficient text-to-video retrieval via multi-modal multi-tagger derived pre-screening
    Xu, Yingjia
    Wu, Mengxia
    Guo, Zixin
    Cao, Min
    Ye, Mang
    Laaksonen, Jorma
    Visual Intelligence, 2025, 3(1)