Neural Network Model for Video-Based Analysis of Student's Emotions in E-Learning

Cited by: 7
Authors
Savchenko, A. V. [1]
Makarov, I. A. [2 ]
Affiliations
[1] HSE University, Laboratory of Algorithms and Technologies for Network Analysis, Nizhny Novgorod 603093, Russia
[2] Artificial Intelligence Research Institute (AIRI), Moscow 117246, Russia
Funding
Russian Science Foundation
Keywords
image processing; online learning; emotion classification in video; face clustering; text recognition in images
DOI
10.3103/S1060992X22030055
CLC Number
O43 [Optics]
Discipline Codes
070207; 0803
Abstract
In this paper, we consider the problem of automatically analyzing the emotional state of students during online classes from video surveillance data, a pressing task in e-learning. We propose a novel neural network model for recognizing students' emotions from video images of their faces and use it to construct an algorithm that classifies the individual and group emotions of students in video clips. In the first step, the algorithm detects faces, extracts their features, and groups the faces belonging to each student; to increase accuracy, we match the detected faces with students' names extracted by text-recognition algorithms. In the second step, specially trained efficient neural networks extract the emotional features of each identified person, aggregate them with statistical functions, and perform the final classification. In the last step, the fragments of the video lesson with the most pronounced emotions of a student can be visualized. Our experiments on several datasets from the EmotiW (Emotion Recognition in the Wild) challenge show that the accuracy of the developed algorithms is comparable to that of known analogues, while their computational performance in emotion classification is higher.
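For illustration, the three-step pipeline described in the abstract can be expressed in code. The following is a minimal sketch under stated assumptions, not the authors' implementation: detect_faces, embed_identity, and embed_emotion are hypothetical stand-ins for the paper's face detector, face-grouping features, and lightweight emotion network; agglomerative clustering and mean/std pooling are assumed substitutes for the paper's grouping and statistical aggregation steps, and the matching of face groups to on-screen student names via text recognition is omitted.

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def detect_faces(frame):
    """Hypothetical face detector: returns cropped face images for one frame."""
    raise NotImplementedError

def embed_identity(face):
    """Hypothetical identity embedding used to group faces by student."""
    raise NotImplementedError

def embed_emotion(face):
    """Hypothetical frame-level emotional feature extractor."""
    raise NotImplementedError

def classify_student_emotions(frames, emotion_clf, n_students):
    # Step 1: detect faces in every frame and group them by student
    # (the paper additionally refines groups with recognized student names).
    faces, identities = [], []
    for frame in frames:
        for face in detect_faces(frame):
            faces.append(face)
            identities.append(embed_identity(face))
    clusters = AgglomerativeClustering(n_clusters=n_students).fit_predict(
        np.vstack(identities))

    # Step 2: per student, aggregate frame-level emotional features with
    # statistical functions (here mean and std) and classify the result.
    emotions = {}
    for student in range(n_students):
        feats = np.vstack([embed_emotion(f)
                           for f, c in zip(faces, clusters) if c == student])
        descriptor = np.concatenate([feats.mean(axis=0), feats.std(axis=0)])
        emotions[student] = emotion_clf.predict(descriptor[None, :])[0]

    # Step 3 (visualizing the most emotionally pronounced lesson fragments)
    # would select the frames whose features deviate most from the mean.
    return emotions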
Pages: 237-244
Number of pages: 8
Related Papers
50 records in total
  • [21] TOWARDS A TRUST MODEL IN E-LEARNING: ANTECEDENTS OF A STUDENT'S TRUST
    Wongse-ek, Woraluck
    Wills, Gary B.
    Gilbert, Lester
    PROCEEDINGS OF THE IADIS INTERNATIONAL CONFERENCE E-LEARNING 2013, 2013, : 336 - 340
  • [22] A General Student's Model Suitable for Intelligent E-Learning Systems
    De Arriaga, F.
    Gingell, C.
    Arriaga, A.
    Arriaga, J.
    Arriaga, F., Jr.
    PROCEEDINGS OF THE 2ND EUROPEAN COMPUTING CONFERENCE: NEW ASPECTS ON COMPUTERS RESEARCH, 2008, : 167 - +
  • [23] Video-Based E-Learning in Communication Skills for Physicians Provides Higher Agreement to Tissue Donation
    Vorstius Kruijff, P. E.
    Huisman-Ebskamp, M. W.
    de Vos, M. L. G.
    Jansen, N. E.
    Slappendel, R.
    TRANSPLANTATION PROCEEDINGS, 2016, 48 (06) : 1867 - 1874
  • [24] Effects of Video-based e-Learning on EFL Achievement: The Mediation Effect of Behavior Control Strategies
    Chae, Soo Eun
    JOURNAL OF ASIA TEFL, 2018, 15 (02) : 398 - 413
  • [25] STUDENT EVALUATION MODEL USING BAYESIAN NETWORK IN AN INTELLIGENT E-LEARNING SYSTEM
    Chakraborty, Baisakhi
    Sinha, Meghamala
    IIOAB JOURNAL, 2016, 7 (02) : 51 - 60
  • [26] Intelligent personalised learning system based on emotions in e-learning
    Karthika, R.
    Jesi, V. E.
    Christo, M. S.
    Deborah, L. J.
    Sivaraman, A.
    Kumar, S.
    Personal and Ubiquitous Computing, 2023, 27 (06) : 2211 - 2223
  • [27] Video-Based Student Engagement Estimation via Time Convolution Neural Networks for Remote Learning
    Saleh, Khaled
    Yu, Kun
    Chen, Fang
    AI 2021: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13151 : 658 - 667
  • [28] A Student's Assistant for Open e-Learning
    Lalingkar, Aparna
    Ramani, Srinivasan
    2009 INTERNATIONAL WORKSHOP ON TECHNOLOGY FOR EDUCATION (T4E 2009), 2009, : 62 - 67
  • [29] Spatiotemporal Neural Network for Video-Based Pose Estimation
    Ji, Bin
    Pan, Ye
    Jin, Xiaogang
    Yang, Xubo
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2022, 34 (02) : 189 - 197
  • [30] Academic Emotions Model Based on Radial Basis Function in E-Learning System
    Wang Wan-sen
    Guo Chun-juan
    Liu Shuai
    APPLIED INFORMATICS AND COMMUNICATION, PT I, 2011, 224 : 639 - 646