Active Learning with Contrastive Pre-training for Facial Expression Recognition

Cited by: 1
Authors:
Roy, Shuvendu [1,2]
Etemad, Ali [1,2]
Affiliations:
[1] Queen's University, Dept. of ECE, Kingston, ON, Canada
[2] Queen's University, Ingenuity Labs Research Institute, Kingston, ON, Canada
Keywords: Facial Expression Recognition; Semi-supervised Learning; Contrastive Learning
DOI: 10.1109/ACIIW59127.2023.10388093
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
Deep learning has played a significant role in the success of facial expression recognition (FER), thanks to large models and vast amounts of labelled data. However, obtaining labelled data requires a tremendous amount of human effort, time, and financial resources. While some prior works have focused on reducing the need for large amounts of labelled data through various unsupervised methods, another promising approach, active learning, has barely been explored in the context of FER. Active learning selects and labels the most representative samples from an unlabelled set to make the best use of a limited 'labelling budget'. In this paper, we implement and study 8 recent active learning methods on three public FER datasets: FER13, RAF-DB, and KDEF. Our findings show that existing active learning methods do not perform well in the context of FER, likely because they suffer from 'cold start', a phenomenon that occurs when the initial set of labelled samples is not sufficiently representative of the entire dataset. To address this issue, we propose contrastive self-supervised pre-training, which first learns the underlying representations from the entire unlabelled dataset. We then follow this with the active learning methods and observe that our 2-step approach yields up to a 9.2% improvement over random sampling and up to a 6.7% improvement over the best existing active learning baseline without pre-training. We will make the code for this study public upon publication at: github.com/ShuvenduRoy/ActiveFER.
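For readers who want to see the abstract's 2-step recipe in code, the sketch below pairs a SimCLR-style contrastive pre-training loss with an entropy-based acquisition step. The loss form, the names `nt_xent_loss` and `entropy_acquisition`, and the choice of entropy as the acquisition function are illustrative assumptions rather than the authors' exact implementation (the paper benchmarks 8 acquisition strategies); the official code is to be released at github.com/ShuvenduRoy/ActiveFER.

```python
# Minimal sketch of the 2-step approach: (1) contrastive self-supervised
# pre-training on the full unlabelled set, (2) active selection of samples
# to label. All names and the entropy-based acquisition are assumptions
# for illustration, not the authors' exact method.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two augmented views of a batch.

    z1, z2: (n, d) embeddings of the two views of the same n images.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d), unit norm
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # mask self-similarity
    # Positive pair for row i is row i+n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

@torch.no_grad()
def entropy_acquisition(model, unlabelled_loader, budget, device='cpu'):
    """Step 2 (illustrative): rank unlabelled faces by predictive entropy
    and return the indices of the `budget` most uncertain samples.

    Assumes the loader yields (image_batch, dataset_index) pairs and that
    `model` outputs class logits after pre-training plus a classifier head.
    """
    model.eval()
    scores = []
    for x, idx in unlabelled_loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        scores.append((ent.cpu(), idx))
    ent, idx = map(torch.cat, zip(*scores))
    return idx[ent.argsort(descending=True)[:budget]]
```

In this sketch, the pre-training step is what counters the cold-start problem: the encoder already reflects the structure of the whole unlabelled set before any acquisition function is asked to pick samples.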
Pages: 8
Related Papers (50 in total)
  • [21] Zhou, Jinghao; Dong, Li; Gan, Zhe; Wang, Lijuan; Wei, Furu. Non-Contrastive Learning Meets Language-Image Pre-Training. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 11028-11038.
  • [22] Oh, Byungkook; Suh, Ilhyun; Cha, Kihoon; Kim, Junbeom; Park, Goeon; Jeong, Sihyun. Pre-training local and non-local geographical influences with contrastive learning. Knowledge-Based Systems, 2023, 259.
  • [23] Cheng, Mingyue; Yuan, Fajie; Liu, Qi; Xin, Xin; Chen, Enhong. Learning Transferable User Representations with Sequential Behaviors via Contrastive Pre-training. 2021 21st IEEE International Conference on Data Mining (ICDM 2021), 2021: 51-60.
  • [24] Yin, Bing; Yin, Shi; Liu, Cong; Zhang, Yanyong; Xi, Changfeng; Yin, Baocai; Ling, Zhenhua. Dynamic facial expression recognition with pseudo-label guided multi-modal pre-training. IET Computer Vision, 2024, 18 (01): 33-45.
  • [25] Zhi, Ruicong; Xu, Hairui; Wan, Ming; Li, Tingting. Combining 3D Convolutional Neural Networks with Transfer Learning by Supervised Pre-Training for Facial Micro-Expression Recognition. IEICE Transactions on Information and Systems, 2019, E102D (05): 1054-1064.
  • [26] Yuan, Xiaowei; Liu, Kang; Wang, Yequan. Contrastive Language-knowledge Graph Pre-training. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, 23 (04).
  • [27] Liu, Zhuang; Ma, Yunpu; Schubert, Matthias; Ouyang, Yuanxin; Xiong, Zhang. Multi-Modal Contrastive Pre-training for Recommendation. Proceedings of the 2022 International Conference on Multimedia Retrieval (ICMR 2022), 2022: 99-108.
  • [28] Sun, Shuo; Hou, Hong-xu; Yang, Zong-heng; Wang, Yi-song. Multilingual Pre-training Model-Assisted Contrastive Learning Neural Machine Translation. 2023 International Joint Conference on Neural Networks (IJCNN), 2023.
  • [29] Jiang, Chaoya; Ye, Wei; Xu, Haiyang; Huang, Songfang; Huang, Fei; Zhang, Shikun. Vision Language Pre-training by Contrastive Learning with Cross-Modal Similarity Regulation. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1: Long Papers, 2023: 14660-14679.
  • [30] Adjeisah, Michael; Zhu, Xinzhong; Xu, Huiying; Ayall, Tewodros Alemu. Graph Contrastive Multi-view Learning: A Pre-training Framework for Graph Classification. Knowledge-Based Systems, 2024, 299.