SuperConText: Supervised Contrastive Learning Framework for Textual Representations

Cited by: 1
Authors
Moukafih, Youness [1 ,2 ]
Sbihi, Nada [1 ]
Ghogho, Mounir [1 ]
Smaili, Kamel [2 ]
Affiliations
[1] Univ Int Rabat, Coll Engn & Architecture, TIC Lab, Sale 11103, Morocco
[2] Loria, Campus Sci, Vandoeuvre Les Nancy, France
Keywords
Training; Task analysis; Benchmark testing; Representation learning; Entropy; Deep learning; Text categorization; contrastive learning; text classification; hard negative examples;
DOI
10.1109/ACCESS.2023.3241490
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
In the last decade, deep neural networks (DNNs) have been shown to outperform conventional machine learning models on supervised learning tasks. Most of these models are optimized by minimizing the well-known cross-entropy objective, which, however, suffers from a number of drawbacks, including poor margins and instability. Taking inspiration from recent self-supervised contrastive representation learning approaches, we introduce the Supervised Contrastive learning framework for Textual representations (SuperConText) to address these issues. We pretrain a neural network by minimizing a novel fully supervised contrastive loss, with the goal of increasing both the inter-class separability and the intra-class compactness of the embeddings in the latent space. Examples belonging to the same class are treated as positive pairs, while examples from different classes are treated as negatives. Furthermore, we propose a simple yet effective method for selecting hard negatives during training. In an extensive series of experiments, we study the impact of several parameters (e.g., the batch size) on the quality of the learned representations. Simulation results show that the proposed solution outperforms several competing approaches on various large-scale text classification benchmarks without requiring specialized architectures, data augmentations, memory banks, or additional unsupervised data. For instance, we achieve a top-1 accuracy of 61.94% on the Amazon-F dataset, which is 3.54% above the best result obtained with cross-entropy using the same model architecture.
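The abstract does not reproduce SuperConText's exact loss, but the supervised contrastive objective it builds on (same-label examples as positives, all other in-batch examples as negatives) can be sketched as follows. This is a minimal, illustrative NumPy implementation of a standard SupCon-style loss; the function name, temperature value, and masking details are assumptions, not the paper's implementation.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative SupCon-style loss: for each anchor, same-label examples
    in the batch are positives and all other examples are negatives."""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # pairwise scaled similarities
    n = len(labels)
    mask_self = ~np.eye(n, dtype=bool)     # exclude each anchor from its own denominator
    # stable log-softmax over all non-self examples
    sim_max = np.max(np.where(mask_self, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * mask_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # positives: same label, not the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]) & mask_self
    pos_counts = pos_mask.sum(axis=1)
    # average log-probability over each anchor's positives, negated
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return loss_per_anchor[pos_counts > 0].mean()
```

As the abstract's goal of inter-class separability and intra-class compactness suggests, this loss is lower when same-class embeddings cluster together and different-class embeddings are far apart.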
Pages: 16820-16830
Page count: 11