Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Cited by: 1
Authors
Xie, Tianshu [1 ]
Yang, Yuhang [2 ]
Ding, Zilin [2 ]
Cheng, Xuan [2 ]
Wang, Xiaomin [2 ]
Gong, Haigang [2 ]
Liu, Ming [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Quzhou, Quzhou 324003, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Wenzhou Med Univ, Quzhou Affiliated Hosp, Quzhou Peoples Hosp, Quzhou 324000, Peoples R China
Keywords
Task analysis; Training; Self-supervised learning; Visualization; Supervised learning; Semantics; Predictive models; Deep learning; classification; self-supervised learning; convolutional neural network; feature transformation
DOI
10.1109/ACCESS.2022.3233104
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Traditional self-supervised learning requires convolutional neural networks (CNNs) to use external pretext tasks (i.e., image- or video-based tasks) to encode high-level semantic visual representations. In this paper, we show that feature transformations within CNNs can also serve as supervisory signals for constructing a self-supervised task, which we call the internal pretext task, and that such a task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task of identifying which channels were discarded. CNNs are trained to predict joint labels generated by combining the self-supervised labels with the original labels. In this way, the CNN learns which channels are missing while classifying, encouraging it to mine richer feature information. Extensive experiments show that our approach is effective across various models and datasets. Notably, it incurs only negligible computational overhead, and it can be combined with other methods for further gains.
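The joint-label construction described in the abstract can be sketched in a few lines. This is a minimal illustration under assumed details: the channel-discarding pattern (contiguous groups), the `num_groups` parameter, and the function names are hypothetical, not taken from the paper.

```python
import numpy as np

def drop_channel_group(feature_map, group_idx, num_groups=4):
    """Zero out one contiguous group of channels in a (C, H, W) feature map,
    simulating the channel-discarding feature transformation (assumed grouping)."""
    c = feature_map.shape[0]
    assert c % num_groups == 0, "channel count must divide evenly into groups"
    out = feature_map.copy()
    size = c // num_groups
    out[group_idx * size:(group_idx + 1) * size] = 0.0
    return out

def joint_label(class_label, group_idx, num_groups=4):
    """Combine the original class label with the self-supervised label
    (which channel group was discarded) into a single joint label."""
    return class_label * num_groups + group_idx

# Example: an 8-channel feature map with group 1 discarded,
# paired with original class 2 -> joint label 2 * 4 + 1 = 9.
fmap = np.ones((8, 4, 4), dtype=np.float32)
dropped = drop_channel_group(fmap, group_idx=1)
label = joint_label(class_label=2, group_idx=1)
```

In this sketch, a classifier would be trained over `num_classes * num_groups` joint outputs, so the only added cost is the enlarged final layer, consistent with the negligible-overhead claim.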
Pages: 1708-1717
Number of pages: 10
Related Papers
50 records total
  • [21] Three-Dimension Attention Mechanism and Self-Supervised Pretext Task for Augmenting Few-Shot Learning
    Liang, Yong
    Chen, Zetao
    Lin, Daoqian
    Tan, Junwen
    Yang, Zhenhao
    Li, Jie
    Li, Xinhai
    [J]. IEEE ACCESS, 2023, 11 : 59428 - 59437
  • [22] Self-Supervised Learning Disentangled Group Representation as Feature
    Wang, Tan
    Yue, Zhongqi
    Huang, Jianqiang
    Sun, Qianru
    Zhang, Hanwang
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [23] Self-Supervised Representation Learning by Rotation Feature Decoupling
    Feng, Zeyu
    Xu, Chang
    Tao, Dacheng
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10356 - 10366
  • [24] FLSL: Feature-level Self-supervised Learning
    Su, Qing
    Netchaev, Anton
    Li, Hai
    Ji, Shihao
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [25] Concurrent Discrimination and Alignment for Self-Supervised Feature Learning
    Dutta, Anjan
    Mancini, Massimiliano
    Akata, Zeynep
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 2189 - 2198
  • [26] A New Self-supervised Method for Supervised Learning
    Yang, Yuhang
    Ding, Zilin
    Cheng, Xuan
    Wang, Xiaomin
    Liu, Ming
    [J]. INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155
  • [27] INVESTIGATING SELF-SUPERVISED LEARNING FOR SPEECH ENHANCEMENT AND SEPARATION
    Huang, Zili
    Watanabe, Shinji
    Yang, Shu-wen
    Garcia, Paola
    Khudanpur, Sanjeev
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6837 - 6841
  • [28] Industrial Image Anomaly Detection via Self-Supervised Learning with Feature Enhancement Assistance
    Wu, Bin
    Wang, Xiaoqi
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (16):
  • [29] HAIC-NET: Semi-supervised OCTA vessel segmentation with self-supervised pretext task and dual consistency training
    Shen, Hailan
    Tang, Zheng
    Li, Yajing
    Duan, Xuanchu
    Chen, Zailiang
    [J]. PATTERN RECOGNITION, 2024, 151
  • [30] PT4AL: Using Self-supervised Pretext Tasks for Active Learning
    Yi, John Seon Keun
    Seo, Minseok
    Park, Jongchan
    Choi, Dong-Geol
    [J]. COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 596 - 612