Kalman contrastive unsupervised representation learning

Cited: 0
Authors
Mohammad Mahdi Jahani Yekta [1 ]
Affiliation
[1] Stanford University, Department of Computer Science
Keywords
Contrastive unsupervised learning; Dictionary building; Kalman filter; MoCo; Regularized optimization;
DOI
10.1038/s41598-024-76085-7
Abstract
We first propose a Kalman contrastive (KalCo) framework for unsupervised representation learning by dictionary lookup. It builds a dynamic dictionary of encoded representation keys with a queue and a Kalman filter encoder, to which the encoded queries are matched. The large and consistent dictionaries built this way raise the accuracy of KalCo well above that of the well-known momentum contrastive (MoCo) framework, which is in fact a highly simplified version of KalCo with only a fixed scalar momentum coefficient. On a standard pretext task of instance discrimination on the ImageNet-1M (IN-1M) dataset, for example, KalCo yields an accuracy of 80%, compared to 55% for MoCo. Similar results are also obtained on Instagram-1B (IG-1B). For the same task on a collection of OpenfMRI datasets, the accuracy is 84%. We then upgrade KalCo to KalCo v2 by adding an MLP projection head, more data augmentation, and a larger memory bank. KalCo v2 reaches around 90% on IN-1M and IG-1B, and 95% on OpenfMRI, the first figure being about 3% higher than those of the three most-cited recent alternatives.
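The contrast the abstract draws, a fixed scalar momentum coefficient in MoCo versus a Kalman-filter-driven key-encoder update in KalCo, can be sketched as follows. The abstract gives no implementation details, so the random-walk state model, the noise parameters `Q` and `R`, and the scalar treatment of the encoder parameters below are illustrative assumptions, not the authors' method.

```python
def moco_update(theta_k, theta_q, m=0.999):
    """MoCo key-encoder update: a fixed scalar momentum coefficient m,
    so the key encoder always moves by the constant fraction (1 - m)."""
    return m * theta_k + (1.0 - m) * theta_q


def kalman_update(theta_k, theta_q, P, Q=1e-4, R=1e-2):
    """Hypothetical Kalman-style key-encoder update (illustrative sketch).

    Treats the key-encoder parameter theta_k as the state of a
    random-walk model (process noise Q) and the query-encoder parameter
    theta_q as a noisy observation of it (observation noise R).  The
    Kalman gain K plays the role of MoCo's fixed (1 - m), but adapts
    each step as the state covariance P evolves.
    """
    P_pred = P + Q                               # predict: covariance grows by Q
    K = P_pred / (P_pred + R)                    # Kalman gain, always in (0, 1)
    theta_k = theta_k + K * (theta_q - theta_k)  # correct toward the query encoder
    P = (1.0 - K) * P_pred                       # posterior covariance
    return theta_k, P, K


# A few steps: the gain starts large (uncertain key encoder) and
# settles to a small steady-state value, unlike MoCo's constant 0.001.
theta, P = 0.0, 1.0
for _ in range(20):
    theta, P, K = kalman_update(theta, 1.0, P)
```

Under this toy model, MoCo is recovered exactly by freezing the gain at a constant, which is the sense in which the abstract calls MoCo a simplified special case.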
Related papers
50 records in total
  • [1] A Theoretical Analysis of Contrastive Unsupervised Representation Learning
    Arora, Sanjeev
    Khandeparkar, Hrishikesh
    Khodak, Mikhail
    Plevrakis, Orestis
    Saunshi, Nikunj
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [2] Relative Contrastive Loss for Unsupervised Representation Learning
    Tang, Shixiang
    Zhu, Feng
    Bai, Lei
    Zhao, Rui
    Ouyang, Wanli
    COMPUTER VISION - ECCV 2022, PT XXVII, 2022, 13687 : 1 - 18
  • [3] PAC-Bayesian Contrastive Unsupervised Representation Learning
    Nozawa, Kento
    Germain, Pascal
    Guedj, Benjamin
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 21 - 30
  • [4] Unsupervised Sentence Representation via Contrastive Learning with Mixing Negatives
    Zhang, Yanzhao
    Zhang, Richong
    Mensah, Samuel
    Liu, Xudong
    Mao, Yongyi
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 11730 - 11738
  • [5] Unsupervised Galaxy Morphological Visual Representation with Deep Contrastive Learning
    Wei, Shoulin
    Li, Yadi
    Lu, Wei
    Li, Nan
    Liang, Bo
    Dai, Wei
    Zhang, Zhijian
    PUBLICATIONS OF THE ASTRONOMICAL SOCIETY OF THE PACIFIC, 2022, 134 (1041)
  • [6] MoCoUTRL: a momentum contrastive framework for unsupervised text representation learning
    Zou, Ao
    Hao, Wenning
    Jin, Dawei
    Chen, Gang
    Sun, Feiyan
    CONNECTION SCIENCE, 2023, 35 (01)
  • [7] SMICLR: Contrastive Learning on Multiple Molecular Representations for Semisupervised and Unsupervised Representation Learning
    Pinheiro, Gabriel A.
    Silva, Juarez L. F.
    Quiles, Marcos G.
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2022, 62 (17) : 3948 - 3960
  • [8] Contrastive Unsupervised Representation Learning With Optimize-Selected Training Samples
    Cheng, Yujun
    Zhang, Zhewei
    Li, Xuejing
    Wang, Shengjin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [9] Truly Unsupervised Image-to-Image Translation with Contrastive Representation Learning
    Hong, Zhiwei
    Feng, Jianxing
    Jiang, Tao
    COMPUTER VISION - ACCV 2022, PT III, 2023, 13843 : 239 - 255
  • [10] pNNCLR: Stochastic pseudo neighborhoods for contrastive learning based unsupervised representation learning problems
    Biswas, Momojit
    Buckchash, Himanshu
    Prasad, Dilip K.
    NEUROCOMPUTING, 2024, 593