Mixup Feature: A Pretext Task Self-Supervised Learning Method for Enhanced Visual Feature Learning

Cited by: 0
Authors
Xu, Jiashu [1 ]
Stirenko, Sergii [1 ]
Affiliations
[1] Natl Tech Univ Ukraine, Comp Engn Dept, Igor Sikorsky Kyiv Polytech Inst, UA-03056 Kiev, Ukraine
Keywords
Computer vision; Mixup feature; self-supervised learning; masked autoencoder
DOI
10.1109/ACCESS.2023.3301561
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Self-supervised learning has emerged as an increasingly popular research topic in computer vision. In this study, we propose a novel self-supervised learning approach that uses Mixup features as the pretext task. The method learns visual representations by predicting the Mixup feature of a masked image, which serves as a proxy for higher-level semantic information. Specifically, we investigate the efficacy of Mixup features as the prediction target for self-supervised learning: by setting the Mixup hyperparameter λ, pairwise combinations of Sobel edge feature maps, HOG feature maps, and LBP feature maps are created through Mixup operations. We employ a vision transformer as the backbone network, drawing inspiration from masked autoencoders (MAE). We evaluate the proposed method on three benchmark datasets, CIFAR-10, CIFAR-100, and STL-10, and compare it with other state-of-the-art self-supervised learning approaches. The experiments demonstrate that mixed HOG-Sobel feature maps produced by Mixup achieve the best fine-tuning results on CIFAR-10 and STL-10. Furthermore, compared with contrastive self-supervised learning methods, our approach is more efficient, requiring shorter training and no data augmentation. Compared with MAE-based generative self-supervised learning approaches, the average performance improvement is 0.4%. Overall, the proposed self-supervised learning method based on Mixup features offers a promising direction for future research in computer vision and has the potential to enhance performance across various downstream tasks. Our code will be published on GitHub.
Pages: 82400-82409
Number of pages: 10
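
To make the pretext-task construction described in the abstract concrete, the following minimal Python sketch shows how a HOG-Sobel Mixup feature target could be assembled for a single grayscale image. The function name mixup_feature_target, the fixed coefficient lam=0.5, and the [0, 1] normalization are illustrative assumptions, not the authors' published code; only standard NumPy and scikit-image calls are used.

    import numpy as np
    from skimage.feature import hog
    from skimage.filters import sobel


    def mixup_feature_target(image: np.ndarray, lam: float = 0.5) -> np.ndarray:
        """Sketch: HOG-Sobel Mixup feature target for one grayscale image.

        `lam` is the Mixup mixing coefficient; the abstract describes
        setting it as a hyperparameter (standard Mixup instead samples
        lam ~ Beta(alpha, alpha)). Illustrative, not the authors' code.
        """
        # Dense Sobel edge-magnitude map, same spatial size as the input.
        sobel_map = sobel(image)

        # Dense HOG map: with visualize=True, scikit-image returns a
        # per-pixel rendering of the oriented-gradient histograms next to
        # the feature vector, giving a map matching the input's size.
        _, hog_map = hog(
            image,
            orientations=9,
            pixels_per_cell=(8, 8),
            cells_per_block=(2, 2),
            visualize=True,
        )

        # Normalize each map to [0, 1] so the two modalities mix on an
        # equal footing before the convex combination.
        def norm01(x: np.ndarray) -> np.ndarray:
            return (x - x.min()) / (x.max() - x.min() + 1e-8)

        # Mixup: convex combination of the two feature maps.
        return lam * norm01(hog_map) + (1.0 - lam) * norm01(sobel_map)


    if __name__ == "__main__":
        img = np.random.rand(96, 96)  # stand-in for a 96x96 STL-10 image
        target = mixup_feature_target(img, lam=0.5)
        print(target.shape)  # (96, 96)

An LBP-based pairing could be built the same way, substituting skimage.feature.local_binary_pattern for one of the maps; in the MAE-style setup, a vision-transformer encoder-decoder would then regress such a target for the masked patches.
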
Related papers
50 items in total
  • [11] Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning
    Song, Kaiyou
    Zhang, Shan
    Luo, Zimeng
    Wang, Tong
    Xie, Jin
    2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 16053-16062
  • [12] GAN-Based Image Colorization for Self-Supervised Visual Feature Learning
    Treneska, Sandra
    Zdravevski, Eftim
    Pires, Ivan Miguel
    Lameski, Petre
    Gievska, Sonja
    Sensors, 2022, 22(4)
  • [13] Feature selection and cascade dimensionality reduction for self-supervised visual representation learning
    Qu, Peixin
    Jin, Songlin
    Tian, Yongqin
    Zhou, Ling
    Zheng, Ying
    Zhang, Weidong
    Xu, Yibo
    Pan, Xipeng
    Zhao, Wenyi
    Computers & Electrical Engineering, 2023, 106
  • [14] Concurrent Discrimination and Alignment for Self-Supervised Feature Learning
    Dutta, Anjan
    Mancini, Massimiliano
    Akata, Zeynep
    2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), 2021: 2189-2198
  • [15] FLSL: Feature-level Self-supervised Learning
    Su, Qing
    Netchaev, Anton
    Li, Hai
    Ji, Shihao
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [16] Self-Supervised Representation Learning by Rotation Feature Decoupling
    Feng, Zeyu
    Xu, Chang
    Tao, Dacheng
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 10356-10366
  • [17] Self-Supervised Learning Disentangled Group Representation as Feature
    Wang, Tan
    Yue, Zhongqi
    Huang, Jianqiang
    Sun, Qianru
    Zhang, Hanwang
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34
  • [18] FundusNet, A self-supervised contrastive learning framework for Fundus Feature Learning
    Mojab, Nooshin
    Alam, Minhaj
    Hallak, Joelle
    Investigative Ophthalmology & Visual Science, 2022, 63(7)
  • [19] Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning
    Wen, Zixin
    Li, Yuanzhi
    International Conference on Machine Learning, Vol. 139, 2021
  • [20] Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning
    Zaiem, Salah
    Parcollet, Titouan
    Essid, Slim
    Interspeech 2021, 2021: 2851-2855