Self-Supervised Visual Representations for Cross-Modal Retrieval

Cited by: 7
Authors
Patel, Yash [1 ]
Gomez, Lluis [2 ]
Rusinol, Marcal [2 ]
Karatzas, Dimosthenis [2 ]
Jawahar, C., V [3 ]
Affiliations
[1] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[2] Univ Autonoma Barcelona, Comp Vis Ctr, Barcelona, Spain
[3] IIIT Hyderabad, CVIT, KCIS, Hyderabad, India
Keywords
Self-Supervised Learning; Visual Representations; Cross-Modal Retrieval;
DOI
10.1145/3323873.3325035
CLC (Chinese Library Classification) number
TP31 [Computer Software];
Discipline codes
081202 ; 0835 ;
Abstract
Cross-modal retrieval methods have improved significantly in recent years through the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires tremendous human effort and, moreover, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text across the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but also that the learned representations are better suited for cross-modal retrieval than supervised pre-training of the same network on the ImageNet dataset.
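The two-headed objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the "semantic context" of an article and of a caption are each represented as a soft topic distribution (e.g. from a topic model fit on Wikipedia text), stands in for the CNN with a single linear projection per head, and sums a soft-target cross-entropy over both predictions. All names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TOPICS = 8    # hypothetical number of semantic topics
FEAT_DIM = 16   # hypothetical image-feature dimensionality

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(pred, target):
    # Soft-target cross-entropy between predicted and reference topic distributions.
    return -np.sum(target * np.log(pred + 1e-12), axis=-1).mean()

# Toy stand-ins for the CNN's two prediction heads: one linear projection each.
W_article = rng.normal(size=(FEAT_DIM, N_TOPICS)) * 0.1
W_caption = rng.normal(size=(FEAT_DIM, N_TOPICS)) * 0.1

def two_headed_loss(img_feat, article_topics, caption_topics):
    # Head 1: predict the topic distribution of the surrounding article.
    p_article = softmax(img_feat @ W_article)
    # Head 2: predict the topic distribution of the image's caption.
    p_caption = softmax(img_feat @ W_caption)
    return (cross_entropy(p_article, article_topics)
            + cross_entropy(p_caption, caption_topics))

# Dummy batch: 4 image features with soft topic targets for both heads.
feats = rng.normal(size=(4, FEAT_DIM))
art_t = softmax(rng.normal(size=(4, N_TOPICS)))
cap_t = softmax(rng.normal(size=(4, N_TOPICS)))
loss = two_headed_loss(feats, art_t, cap_t)
print(float(loss))
```

Minimizing this loss over image-article-caption triples requires no manual labels, which is the self-supervision the paper exploits.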
Pages: 182 / 186
Page count: 5