Video-Based Student Engagement Estimation via Time Convolution Neural Networks for Remote Learning

Cited by: 1
Authors
Saleh, Khaled [1 ]
Yu, Kun [1 ]
Chen, Fang [1 ]
Affiliations
[1] Univ Technol Sydney, Data Sci Inst, Ultimo, NSW, Australia
Keywords
Engagement prediction; Time-series ConvNet; Behaviour understanding; Recognition
DOI
10.1007/978-3-030-97546-3_53
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Given the recent global outbreak of the COVID-19 pandemic, most schools and universities have adapted many of their learning materials and lectures for online delivery. As a result, it has become necessary to have quantifiable measures of how students are perceiving and interacting with this 'new normal' way of education. In this work, we focus on the engagement metric, which has been shown in the literature to be a strong indicator of how students deal with the information and knowledge presented to them. To this end, we propose a novel data-driven approach based on a special variant of convolutional neural networks that predicts students' engagement levels from a video feed of their faces. Our proposed framework achieved a promising mean-squared error (MSE) of only 0.07 when evaluated on a real dataset of students taking an online course. Moreover, it achieved superior results compared with two baseline models commonly utilised in the literature for tackling this problem.
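For illustration, the sketch below shows one way a time-convolutional regressor of this kind could be set up in PyTorch: stacked dilated 1D convolutions over per-frame face features, pooled over time and trained with the MSE objective reported above. The layer sizes, feature dimensionality, and input features are assumptions made for the example, not the authors' exact architecture.

```python
# Hypothetical sketch of a temporal (1D) convolutional regressor for
# engagement estimation from per-frame face features. Layer sizes and the
# feature representation are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class TemporalEngagementRegressor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, kernel_size=3):
        super().__init__()
        # Stacked dilated 1D convolutions over the time axis of a clip.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size, padding=4, dilation=4),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)   # collapse the time dimension
        self.head = nn.Linear(hidden, 1)       # scalar engagement level

    def forward(self, x):
        # x: (batch, time, feat_dim) per-frame face features
        x = x.transpose(1, 2)                  # -> (batch, feat_dim, time)
        h = self.pool(self.tcn(x)).squeeze(-1) # -> (batch, hidden)
        return self.head(h).squeeze(-1)        # -> (batch,)

# Regression objective matching the MSE metric reported in the abstract.
model = TemporalEngagementRegressor()
clip_feats = torch.randn(8, 60, 128)           # 8 clips, 60 frames each
target = torch.rand(8)                         # engagement levels in [0, 1]
loss = nn.MSELoss()(model(clip_feats), target)
loss.backward()
```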
Pages: 658-667
Number of pages: 10