SuperConText: Supervised Contrastive Learning Framework for Textual Representations

Cited by: 1
Authors
Moukafih, Youness [1 ,2 ]
Sbihi, Nada [1 ]
Ghogho, Mounir [1 ]
Smaili, Kamel [2 ]
Affiliations
[1] Univ Int Rabat, Coll Engn & Architecture, TIC Lab, Sale 11103, Morocco
[2] Loria, Campus Sci, Vandoeuvre Les Nancy, France
Keywords
Training; Task analysis; Benchmark testing; Representation learning; Entropy; Deep learning; Text categorization; contrastive learning; text classification; hard negative examples
DOI
10.1109/ACCESS.2023.3241490
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In the last decade, deep neural networks (DNNs) have been shown to outperform conventional machine learning models on supervised learning tasks. Most of these models are optimized by minimizing the well-known cross-entropy objective function, which, however, has a number of drawbacks, including poor margins and instability. Taking inspiration from recent self-supervised contrastive representation learning approaches, we introduce the Supervised Contrastive learning framework for Textual representations (SuperConText) to address these issues. We pretrain a neural network by minimizing a novel fully supervised contrastive loss, with the goal of increasing both the inter-class separability and the intra-class compactness of the embeddings in the latent space. Examples belonging to the same class are regarded as positive pairs, while examples belonging to different classes are considered negatives. Further, we propose a simple yet effective method for selecting hard negatives during training. In an extensive series of experiments, we study the impact of a number of parameters (e.g., the batch size) on the quality of the learned representations. Simulation results show that the proposed solution outperforms several competing approaches on various large-scale text classification benchmarks without requiring specialized architectures, data augmentations, memory banks, or additional unsupervised data. For instance, we achieved a top-1 accuracy of 61.94% on the Amazon-F dataset, which is 3.54% above the best result obtained using cross-entropy with the same model architecture.
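To make the objective described in the abstract concrete, the following is a minimal NumPy sketch of a supervised contrastive loss of the kind the paper builds on (in the spirit of SupCon): in-batch examples with the same label are treated as positives and pulled together, while all other examples act as negatives. This is an illustrative sketch, not the authors' implementation; the paper's hard-negative selection strategy and the exact form of its loss are omitted here.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Batch-wise supervised contrastive loss (SupCon-style sketch).

    embeddings: (n, d) array of representations from the encoder.
    labels:     length-n array of class labels.
    Same-label pairs are positives; different-label pairs are negatives.
    """
    labels = np.asarray(labels)
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    # Exclude each example from its own denominator via -inf logits
    logits = np.where(mask_self, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss, count = 0.0, 0
    for i in range(n):
        positives = (labels == labels[i]) & ~mask_self[i]
        if positives.any():
            # Average negative log-likelihood over this anchor's positives
            loss += -log_prob[i, positives].mean()
            count += 1
    return loss / max(count, 1)
```

As a sanity check, a batch whose embeddings already cluster by class yields a lower loss than the same embeddings with mismatched labels, which is exactly the inter-class separability / intra-class compactness behavior the abstract targets.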
Pages: 16820-16830 (11 pages)