MICD: More intra-class diversity in few-shot text classification with many classes

Cited by: 0
Authors
Jang, Gwangseon [1 ,3 ]
Jeong, Hyeon Ji [2 ]
Yi, Mun Yong [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Grad Sch Data Sci, Daejeon, South Korea
[2] Kongju Natl Univ, Sch Artificial Intelligence, Cheonan, South Korea
[3] Korea Inst Sci & Technol Informat KISTI, Large Scale AI Res Grp, Daejeon, South Korea
Keywords
Few-shot text classification; Many classes; Intra-class diversity; Contrastive loss; Data augmentation;
DOI
10.1016/j.knosys.2024.112851
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot learning has gained much interest and achieved remarkable performance in handling limited data scenarios. However, existing few-shot text classification methods typically aim at classifying a limited number of classes, usually ranging from 5 to 10, posing a challenge for many real-world tasks that require few-shot text classification over many classes. Few-shot text classification with many classes has rarely been studied, and it is a challenging problem: distinguishing among many classes is harder than distinguishing among a small number of classes. To address this issue, we propose a new few-shot text classification model for many classes called MICD (More Intra-Class Diversity in few-shot text classification with many classes). Our model comprises two crucial components: Intra-Class Diversity Contrastive Learning (ICDCL) and Intra-Class Augmentation (ICA). ICDCL trains an encoder to enhance feature discriminability by maintaining both intra-class diversity and inter-class specificity, effectively improving generalization performance even when data is limited. ICA addresses data scarcity by selecting diverse support samples and applying intra-class mix-up, enabling robust generalization to out-of-distribution data, an essential consideration in many-class few-shot learning scenarios. Experimental results on four real datasets show that MICD provides significant performance improvement over other state-of-the-art approaches.
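The two ingredients named in the abstract can be pictured with a minimal Python sketch. The code below is an illustrative assumption, not the authors' implementation: it mixes support embeddings within each class (a plain intra-class mix-up, whereas ICA first selects diverse support samples) and trains with a standard supervised contrastive loss (whereas ICDCL additionally preserves intra-class diversity). All function names, the temperature, and the mix-up alpha are hypothetical.

# Sketch only: plain intra-class mix-up + supervised contrastive loss,
# standing in for the ICA and ICDCL components described in the abstract.
import torch
import torch.nn.functional as F


def intra_class_mixup(embeddings: torch.Tensor, labels: torch.Tensor,
                      alpha: float = 0.4):
    """Mix each support embedding with another random sample of the same class."""
    mixed, mixed_labels = [], []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:  # a singleton class has nothing to mix with
            continue
        perm = idx[torch.randperm(idx.numel())]
        lam = torch.distributions.Beta(alpha, alpha).sample((idx.numel(), 1))
        mixed.append(lam * embeddings[idx] + (1 - lam) * embeddings[perm])
        mixed_labels.append(labels[idx])
    return torch.cat(mixed), torch.cat(mixed_labels)


def supervised_contrastive_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pull same-class embeddings together, push different-class embeddings apart."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.t()) / temperature
    sim.fill_diagonal_(-1e9)  # exclude self-pairs from the softmax
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)
    # average log-probability over each anchor's positive pairs
    loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    emb = torch.randn(20, 64)                    # 20 support embeddings, dim 64
    lab = torch.arange(5).repeat_interleave(4)   # toy 5-way, 4-shot episode
    aug_emb, aug_lab = intra_class_mixup(emb, lab)
    all_emb = torch.cat([emb, aug_emb])
    all_lab = torch.cat([lab, aug_lab])
    print(supervised_contrastive_loss(all_emb, all_lab).item())

Augmenting the support set before computing the contrastive loss, as in the toy episode above, is the general pattern the abstract describes; the paper's own objective and sample-selection strategy differ in the details.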
Pages: 15