Self-Supervised Visual Representations for Cross-Modal Retrieval

Cited by: 7
Authors
Patel, Yash [1 ]
Gomez, Lluis [2 ]
Rusinol, Marcal [2 ]
Karatzas, Dimosthenis [2 ]
Jawahar, C. V. [3]
Affiliations
[1] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[2] Univ Autonoma Barcelona, Comp Vis Ctr, Barcelona, Spain
[3] IIIT Hyderabad, CVIT, KCIS, Hyderabad, India
Keywords
Self-Supervised Learning; Visual Representations; Cross-Modal Retrieval;
DOI
10.1145/3323873.3325035
CLC Classification Number
TP31 [Computer Software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
Cross-modal retrieval methods have improved significantly in recent years through the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort, and their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text across the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method not only learns discriminative visual representations for solving vision tasks such as classification, but that the learned representations are better for cross-modal retrieval than supervised pre-training of the network on the ImageNet dataset.
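The training objective described in the abstract (a CNN predicting the semantic context of an image's enclosing article and of its caption) can be sketched as a two-head network trained against soft topic-distribution targets. This is a minimal illustrative sketch, not the authors' implementation: the tiny trunk, the number of topics, and the random stand-in targets are all assumptions; in the paper the text-side targets would come from a topic model fitted on Wikipedia text.

```python
# Hedged sketch: CNN with two heads predicting (1) article-context and
# (2) caption-context topic distributions. Shapes and targets are
# illustrative stand-ins, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadCNN(nn.Module):
    def __init__(self, n_topics=40):
        super().__init__()
        # Toy trunk; the paper would use a standard CNN backbone.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.article_head = nn.Linear(32, n_topics)  # article semantic context
        self.caption_head = nn.Linear(32, n_topics)  # caption semantic context

    def forward(self, x):
        h = self.trunk(x)
        return self.article_head(h), self.caption_head(h)

def soft_ce(logits, target_probs):
    # Cross-entropy against a soft (topic-distribution) target.
    return -(target_probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

model = TwoHeadCNN(n_topics=40)
images = torch.randn(4, 3, 64, 64)
# Stand-in targets; in practice these come from a text topic model.
article_t = torch.softmax(torch.randn(4, 40), dim=1)
caption_t = torch.softmax(torch.randn(4, 40), dim=1)

a_logits, c_logits = model(images)
loss = soft_ce(a_logits, article_t) + soft_ce(c_logits, caption_t)
loss.backward()  # gradients flow into the shared visual trunk
```

After training, the trunk's pooled features would serve as the visual representation for retrieval; the two heads are only the self-supervised pretext.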
Pages: 182-186
Page count: 5
Related Papers
50 records in total
  • [21] Alwassel, H.; Mahajan, D.; Korbar, B.; Torresani, L.; Ghanem, B.; Tran, D. Self-Supervised Learning by Cross-Modal Audio-Video Clustering. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
  • [22] Huang, Y.; Hu, B.; Zhang, Y.; Gao, C.; Wang, Q. A semi-supervised cross-modal memory bank for cross-modal retrieval. Neurocomputing, 2024, 579.
  • [23] Li, A.; Li, Y.; Shao, Y. Federated learning for supervised cross-modal retrieval. World Wide Web, 2024, 27 (04).
  • [24] Wu, Y.; Liu, J.; Gong, M.; Gong, P.; Fan, X.; Qin, A. K.; Miao, Q.; Ma, W. Self-Supervised Intra-Modal and Cross-Modal Contrastive Learning for Point Cloud Understanding. IEEE Transactions on Multimedia, 2024, 26: 1626-1638.
  • [25] Dong, X.; Yokoya, N.; Wang, L.; Uezato, T. Learning Mutual Modulation for Self-supervised Cross-Modal Super-Resolution. Computer Vision, ECCV 2022, Pt. XIX, 2022, 13679: 1-18.
  • [26] Liang, M.; Du, J.; Liang, Z.; Xing, Y.; Huang, W.; Xue, Z. Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 12, 2024: 13744-13753.
  • [27] Han, K.; Liu, Y.; Wei, R.; Zhou, K.; Xu, J.; Long, K. Supervised Hierarchical Online Hashing for Cross-modal Retrieval. ACM Transactions on Multimedia Computing, Communications, and Applications, 2024, 20 (04).
  • [28] Li, Z.; Yao, T.; Wang, L.; Li, Y.; Wang, G. Supervised Contrastive Discrete Hashing for cross-modal retrieval. Knowledge-Based Systems, 2024, 295.
  • [29] Li, M.; Li, Y.; Huang, S.-L.; Zhang, L. Semantically Supervised Maximal Correlation for Cross-Modal Retrieval. 2020 IEEE International Conference on Image Processing (ICIP), 2020: 2291-2295.
  • [30] Tang, J.; Wang, K.; Shao, L. Supervised Matrix Factorization Hashing for Cross-Modal Retrieval. IEEE Transactions on Image Processing, 2016, 25 (07): 3157-3166.