Video-Based Emotion Recognition in the Wild for Online Education Systems

Cited by: 3
Authors
Mai, Genting [1 ]
Guo, Zijian [1 ]
She, Yicong [1 ]
Wang, Hongni [1 ]
Liang, Yan [1 ]
Affiliations
[1] South China Normal Univ, Sch Software, Guangzhou 510630, Peoples R China
Keywords
Online learning; Emotion recognition; Facial spatiotemporal information; Bi-LSTM; Self-attention;
DOI
10.1007/978-3-031-20868-3_38
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the rapid development of the Internet, online learning has become one of the main ways of acquiring knowledge. To help teachers understand students' emotional states and adjust their teaching plans in a timely manner, this paper proposes a new video-based model, the Wild Facial Spatiotemporal Network (WFSTN), for emotion recognition in online learning environments. The model consists of two modules: a pretrained DenseNet121 that extracts facial spatial features, and a Bidirectional Long Short-Term Memory (Bi-LSTM) network with self-attention that generates attentional hidden states. In addition, a Dataset of Student Emotions in Online Learning Environments (DSEOLE) is produced using a self-developed online educational aid system. The method is evaluated on the Acted Facial Expressions in the Wild (AFEW) and DSEOLE datasets, achieving 72.76% and 73.67% accuracy on three-class classification, respectively. The results show that the proposed method outperforms many existing approaches to emotion recognition for online education.
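The pipeline described in the abstract (per-frame DenseNet121 spatial features fed to a self-attentive Bi-LSTM, followed by a three-class classifier) can be sketched in PyTorch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the 1024-dimensional pooled DenseNet121 feature, a single Bi-LSTM layer with hidden size 256, single-head additive attention over the hidden states, and 224x224 face crops are all choices made for the sketch.

```python
# Minimal sketch (not the authors' code) of a WFSTN-style video emotion classifier:
# pretrained DenseNet121 spatial features + self-attentive Bi-LSTM temporal model.
# Feature sizes, sequence length, and the attention form are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class WFSTNSketch(nn.Module):
    def __init__(self, hidden_size=256, num_classes=3):
        super().__init__()
        # Spatial module: ImageNet-pretrained DenseNet121 with its classifier removed.
        densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        self.backbone = densenet.features            # (B*T, 1024, 7, 7) for 224x224 input
        self.pool = nn.AdaptiveAvgPool2d(1)          # -> (B*T, 1024, 1, 1)
        # Temporal module: Bi-LSTM over the sequence of per-frame features.
        self.bilstm = nn.LSTM(input_size=1024, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        # Self-attention over the Bi-LSTM hidden states (single-head, additive style assumed).
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, clips):                        # clips: (B, T, 3, 224, 224) face crops
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)                 # (B*T, 3, 224, 224)
        feats = self.pool(self.backbone(frames)).flatten(1)   # (B*T, 1024)
        feats = feats.view(b, t, -1)                 # (B, T, 1024)
        hidden, _ = self.bilstm(feats)               # (B, T, 2*hidden_size)
        weights = torch.softmax(self.attn(hidden), dim=1)     # attention weights over time
        video_repr = (weights * hidden).sum(dim=1)   # attention-weighted video representation
        return self.classifier(video_repr)           # (B, 3) logits for three emotion classes


if __name__ == "__main__":
    model = WFSTNSketch()
    logits = model(torch.randn(2, 16, 3, 224, 224))  # 2 clips of 16 face frames each
    print(logits.shape)                              # torch.Size([2, 3])
```

The attention-weighted sum over Bi-LSTM hidden states stands in for the "attentional hidden states" mentioned in the abstract; the paper's exact attention formulation may differ.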
Pages: 516 - 529
Page count: 14