SuperConText: Supervised Contrastive Learning Framework for Textual Representations

Cited by: 1
Authors
Moukafih, Youness [1 ,2 ]
Sbihi, Nada [1 ]
Ghogho, Mounir [1 ]
Smaili, Kamel [2 ]
Affiliations
[1] Univ Int Rabat, Coll Engn & Architecture, TIC Lab, Sale 11103, Morocco
[2] Loria, Campus Sci, Vandoeuvre Les Nancy, France
Keywords
Training; Task analysis; Benchmark testing; Representation learning; Entropy; Deep learning; Text categorization; contrastive learning; text classification; hard negative examples
DOI
10.1109/ACCESS.2023.3241490
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
In the last decade, deep neural networks (DNNs) have been shown to outperform conventional machine learning models on supervised learning tasks. Most of these models are optimized by minimizing the well-known cross-entropy objective, which, however, has a number of drawbacks, including poor margins and training instability. Taking inspiration from recent self-supervised contrastive representation learning approaches, we introduce the Supervised Contrastive learning framework for Textual representations (SuperConText) to address these issues. We pretrain a neural network by minimizing a novel fully supervised contrastive loss, with the goal of increasing both the inter-class separability and the intra-class compactness of the embeddings in the latent space. Examples belonging to the same class are regarded as positive pairs, while examples belonging to different classes are treated as negatives. Further, we propose a simple yet effective method for selecting hard negatives during training. In an extensive series of experiments, we study the impact of several parameters (e.g., the batch size) on the quality of the learned representations. Experimental results show that the proposed solution outperforms several competing approaches on various large-scale text classification benchmarks without requiring specialized architectures, data augmentations, memory banks, or additional unsupervised data. For instance, we achieve a top-1 accuracy of 61.94% on the Amazon-F dataset, which is 3.54% above the best result obtained with cross-entropy using the same model architecture.
Pages: 16820-16830
Page count: 11
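
The abstract describes the core recipe: same-class examples act as positives, different-class examples as negatives, and a hard-negative selection step focuses training on the most confusable negatives. As a rough illustration only, below is a minimal PyTorch sketch of a generic supervised contrastive loss with a simple top-k hard-negative filter; the function name, temperature value, and top-k selection rule are assumptions for illustration and are not the authors' actual SuperConText loss.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1, hard_k=None):
    """Generic supervised contrastive loss (SupCon-style) sketch.

    embeddings: (N, d) tensor of representations from the encoder.
    labels:     (N,) tensor of class ids.
    hard_k:     if set, only the hard_k most similar negatives per anchor
                enter the denominator (a simple hard-negative filter;
                the actual SuperConText selection rule may differ).
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                        # pairwise scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask = same_label & ~self_mask                   # positives: same class, not self
    neg_mask = ~same_label                               # negatives: different class

    if hard_k is not None:
        # keep only the hard_k highest-similarity negatives per anchor
        neg_sim = sim.masked_fill(~neg_mask, float('-inf'))
        topk = neg_sim.topk(min(hard_k, n - 1), dim=1).indices
        hard_neg_mask = torch.zeros_like(neg_mask)
        hard_neg_mask.scatter_(1, topk, True)
        neg_mask = neg_mask & hard_neg_mask

    # softmax denominator over positives and (selected) negatives
    logits_mask = pos_mask | neg_mask
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # mean log-likelihood of positives per anchor; anchors with no positive are skipped
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_count[valid]
    return loss.mean()
```

Typical usage would be `loss = supervised_contrastive_loss(encoder(batch), batch_labels)`, where `encoder` is a hypothetical text encoder producing one embedding per example; after this contrastive pretraining stage, a classifier head is trained on top of the learned representations.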