Active Learning with Contrastive Pre-training for Facial Expression Recognition

Cited by: 1
Authors
Roy, Shuvendu [1,2]
Etemad, Ali [1,2]
Affiliations
[1] Queen's Univ, Dept ECE, Kingston, ON, Canada
[2] Queen's Univ, Ingenu Labs Res Inst, Kingston, ON, Canada
Keywords
Facial Expression Recognition; Semi-supervised Learning; Contrastive Learning
DOI
10.1109/ACIIW59127.2023.10388093
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning has played a significant role in the success of facial expression recognition (FER), thanks to large models and vast amounts of labelled data. However, obtaining labelled data requires a tremendous amount of human effort, time, and financial resources. While some prior works have focused on reducing the need for large amounts of labelled data using various unsupervised methods, another promising approach, active learning, is barely explored in the context of FER. This approach involves selecting and labelling the most representative samples from an unlabelled set to make the best use of a limited 'labelling budget'. In this paper, we implement and study 8 recent active learning methods on three public FER datasets: FER13, RAF-DB, and KDEF. Our findings show that existing active learning methods do not perform well in the context of FER, likely suffering from a phenomenon called 'Cold Start', which occurs when the initial set of labelled samples is not sufficiently representative of the entire dataset. To address this issue, we propose contrastive self-supervised pre-training, which first learns the underlying representations from the entire unlabelled dataset. We then follow this with the active learning methods and observe that our 2-step approach yields up to a 9.2% improvement over random sampling and up to a 6.7% improvement over the best existing active learning baseline without pre-training. We will make the code for this study public upon publication at: github.com/ShuvenduRoy/ActiveFER.
Pages: 8
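
To make the two-step recipe described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released implementation (that is at the repository linked above). Here, nt_xent_loss is a SimCLR-style contrastive objective of the kind typically used for such self-supervised pre-training, and entropy_sampling stands in for one representative active-learning acquisition function among the eight the paper benchmarks; both function names, and the assumption that the unlabelled data loader yields (index, image) pairs, are illustrative choices rather than details taken from the paper.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # SimCLR-style NT-Xent loss over two augmented views (z1, z2) of the
    # same batch of unlabelled images; z1 and z2 are (N, d) embeddings.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)         # (2N, d)
    sim = z @ z.t() / temperature                              # scaled cosine similarities
    n = z1.size(0)
    diag = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float('-inf'))                 # exclude self-pairs
    # The positive for row i is the other view of the same image: i+n mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

@torch.no_grad()
def entropy_sampling(model, unlabelled_loader, budget, device='cpu'):
    # One representative acquisition function: request labels for the
    # `budget` samples whose predicted class distribution has the highest
    # entropy, i.e. where the model is least certain.
    model.eval()
    scores, indices = [], []
    for idx, x in unlabelled_loader:                           # assumed to yield (index, image)
        probs = F.softmax(model(x.to(device)), dim=1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        scores.append(ent.cpu())
        indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.topk(min(budget, len(scores))).indices
    return indices[top]                                        # dataset indices to send for labelling

In the paper's pipeline, an encoder would first be trained with the contrastive loss on all unlabelled images, and only then would the acquisition step pick which samples to label; it is this pre-training that mitigates the cold-start problem of running active learning from a randomly initialized model.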