Self-Supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

Cited by: 1
Authors
Xie, Tianshu [1 ]
Yang, Yuhang [2 ]
Ding, Zilin [2 ]
Cheng, Xuan [2 ]
Wang, Xiaomin [2 ]
Gong, Haigang [2 ]
Liu, Ming [2 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Quzhou, Quzhou 324003, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[3] Wenzhou Med Univ, Quzhou Affiliated Hosp, Quzhou Peoples Hosp, Quzhou 324000, Peoples R China
Keywords
Task analysis; Training; Self-supervised learning; Visualization; Supervised learning; Semantics; Predictive models; Deep learning; classification; self-supervised learning; convolutional neural network; feature transformation;
DOI
10.1109/ACCESS.2022.3233104
Chinese Library Classification
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Traditional self-supervised learning requires convolutional neural networks (CNNs) to encode high-level semantic visual representations through external pretext tasks (i.e., image- or video-based tasks). In this paper, we show that feature transformations within CNNs can also serve as supervisory signals for constructing a self-supervised task, which we call the internal pretext task, and that such a task can be applied to enhance supervised learning. Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task that identifies the discarded channels. The CNN is trained to predict joint labels generated by combining the self-supervised labels with the original labels. In this way, the network knows which channels are missing while classifying, in the hope of mining richer feature information. Extensive experiments show that our approach is effective across various models and datasets while incurring only negligible computational overhead. Furthermore, our approach is compatible with other methods and can be combined with them for better results.
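As a rough illustration of the pipeline the abstract describes, the sketch below assumes a PyTorch-style setup in which the backbone is split into a front part and a rear part; the channel-grouping scheme, the helper names (discard_channel_group, joint_label, JointHead), and the joint-label encoding are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only; grouping scheme and label encoding are assumptions,
# not the paper's exact implementation.
NUM_CLASSES = 10   # original classification labels
NUM_GROUPS = 4     # candidate channel groups that may be discarded

def discard_channel_group(feat, group_idx, num_groups=NUM_GROUPS):
    """Zero out one contiguous group of channels in an internal feature map.

    feat: (B, C, H, W) feature map from an intermediate CNN layer.
    group_idx: which group is discarded; it becomes the self-supervised label
    of the internal pretext task.
    """
    _, c, _, _ = feat.shape
    group_size = c // num_groups
    mask = torch.ones_like(feat)
    start = group_idx * group_size
    mask[:, start:start + group_size] = 0.0
    return feat * mask

def joint_label(class_label, group_idx, num_groups=NUM_GROUPS):
    """Combine the original label and the self-supervised label into one joint label."""
    return class_label * num_groups + group_idx

class JointHead(nn.Module):
    """Classifier over the joint label space (original classes x discarded groups)."""
    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, NUM_CLASSES * NUM_GROUPS)

    def forward(self, pooled):
        return self.fc(pooled)

def training_step(backbone_front, backbone_rear, head, x, y, optimizer):
    """One training step: transform an internal feature map, supervise with the joint label."""
    group_idx = torch.randint(0, NUM_GROUPS, (1,)).item()
    feat = backbone_front(x)                        # internal feature maps
    feat = discard_channel_group(feat, group_idx)   # internal pretext transformation
    pooled = backbone_rear(feat).flatten(1)         # remaining layers + global pooling
    logits = head(pooled)
    loss = F.cross_entropy(logits, joint_label(y, group_idx))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time one would recover the original class prediction from the joint logits (e.g., by marginalizing over the group dimension); the exact procedure is not specified in the abstract, so the above is only a sketch of the training signal.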
Pages: 1708-1717
Number of pages: 10