Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Cited by: 1
Authors
Xie, Tianshu [1 ]
Yang, Yuhang [2 ]
Ding, Zilin [2 ]
Cheng, Xuan [2 ]
Wang, Xiaomin [2 ]
Gong, Haigang [2 ]
Liu, Ming [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Quzhou, Quzhou 324003, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Wenzhou Med Univ, Quzhou Affiliated Hosp, Quzhou Peoples Hosp, Quzhou 324000, Peoples R China
Keywords
Task analysis; Training; Self-supervised learning; Visualization; Supervised learning; Semantics; Predictive models; Deep learning; classification; self-supervised learning; convolutional neural network; feature transformation
DOI
10.1109/ACCESS.2022.3233104
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Traditional self-supervised learning requires convolutional neural networks (CNNs) to encode high-level semantic visual representations through external pretext tasks (i.e., image- or video-based tasks). In this paper, we show that feature transformations within CNNs can also serve as supervisory signals for constructing a self-supervised task, which we call the internal pretext task, and that such a task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task of identifying which channels were discarded. CNNs are trained to predict joint labels generated by combining the self-supervised labels with the original labels. In this way, the network learns which channels are missing while classifying, in the hope of mining richer feature information. Extensive experiments show that our approach is effective across various models and datasets, incurs only negligible computational overhead, and is compatible with other methods to achieve further improvements.
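The abstract's core mechanism (discard a group of feature-map channels, then fold the identity of the discarded group into a joint classification label) can be illustrated with a minimal numpy sketch. The function names, the contiguous-group discarding scheme, and the number of patterns here are illustrative assumptions, not the paper's exact formulation; the key identity is that with K classes and P discard patterns, the joint label `y * P + p` ranges over K x P classes and is losslessly decomposable back into `(y, p)`.

```python
import numpy as np

NUM_PATTERNS = 4  # hypothetical number of channel-discarding patterns

def discard_channels(feature_map, pattern_idx, num_patterns=NUM_PATTERNS):
    """Zero out one contiguous group of channels of a (C, H, W) feature map.

    Each pattern_idx in [0, num_patterns) discards a different channel group,
    giving the network a recoverable self-supervised signal.
    """
    c = feature_map.shape[0]
    group = c // num_patterns
    out = feature_map.copy()
    out[pattern_idx * group:(pattern_idx + 1) * group] = 0.0
    return out

def joint_label(class_label, pattern_idx, num_patterns=NUM_PATTERNS):
    """Combine the original class label with the self-supervised pattern label."""
    return class_label * num_patterns + pattern_idx

def split_joint_label(joint, num_patterns=NUM_PATTERNS):
    """Recover (class_label, pattern_idx) from a joint label."""
    return joint // num_patterns, joint % num_patterns
```

At inference time a model trained on joint labels can be reduced to an ordinary classifier by marginalizing the joint logits over the pattern dimension, which is consistent with the abstract's claim of negligible overhead: the only architectural change is a wider final classification layer.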
Pages: 1708 - 1717
Page count: 10