Gestatten: Estimation of User's Attention in Mobile MOOCs from Eye Gaze and Gaze Gesture Tracking

Cited by: 5
Authors
Kar P. [1 ]
Chattopadhyay S. [1 ]
Chakraborty S. [2 ]
Affiliations
[1] Jadavpur University, Salt Lake, Kolkata, West Bengal, India
[2] Indian Institute of Technology Kharagpur, Kharagpur, India
Keywords
attention estimation; gaze gesture; MOOC; region of gaze
DOI
10.1145/3394974
Abstract
The rapid proliferation of Massive Open Online Courses (MOOCs) has led to a many-fold increase in the sharing of global classrooms through customized online platforms, where a student can attend classes through personal devices such as computers, smartphones, and tablets. However, in the absence of direct interaction with the students during lecture delivery, it becomes difficult to judge their involvement in the classroom. In academics, the degree of a student's attention can indicate whether a course is efficacious in terms of clarity and information; automated feedback can hence be generated to enhance the utility of the course. Precisely discerning human attention normally requires direct observation of the subject. However, visual patterns indicating the degree of concentration can be deciphered by analyzing where an individual focuses and how she visually gesticulates while contemplating the object of interest. In this paper, we develop a methodology called Gestatten that captures a learner's attentiveness from her visual gesture patterns. In this approach, the learner's visual gestures are tracked along with her region of focus. We consider two aspects: first, we do not transfer the learner's video outside her device, applying in-device computing to protect her privacy; second, since a majority of learners use handheld devices such as smartphones to watch MOOC videos, we develop a lightweight approach suitable for in-device computation. A three-level estimation of the learner's attention is performed based on this information. We have implemented and tested Gestatten with 48 participants from different age groups, and we observe that the proposed technique captures the attention level of a learner with high accuracy (average absolute error rate of 8.68%), which matches her ability to learn a topic as measured through a set of cognitive tests. © 2020 ACM.
Related Papers (50 total)
  • [41] Eye gaze estimation from the elliptical features of one iris
    Zhang, Wen
    Zhang, Tai-Ning
    Chang, Sheng-Jiang
    OPTICAL ENGINEERING, 2011, 50 (04)
  • [42] Self-Attention with Convolution and Deconvolution for Efficient Eye Gaze Estimation from a Full Face Image
    Oh, Jun O.
    Chang, Hyung Jin
    Choi, Sang-Il
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4988 - 4996
  • [43] Gaze-communicative Behavior of Stuffed-toy Robot with Joint Attention and Eye Contact based on Ambient Gaze-tracking
    Yonezawa, Tomoko
    Yamazoe, Hirotake
    Utsumi, Akira
    Abe, Shinji
    ICMI'07: PROCEEDINGS OF THE NINTH INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, 2007, : 140 - 145
  • [44] PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services
    Du, Lingyu
    Jia, Jinyuan
    Zhang, Xucong
    Lan, Guohao
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (03)
  • [45] Variation in Attention at Encoding: Insights From Pupillometry and Eye Gaze Fixations
    Miller, Ashley L.
    Unsworth, Nash
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-LEARNING MEMORY AND COGNITION, 2020, 46 (12) : 2277 - 2294
  • [46] Using mobile eye tracking for gaze- and head-contingent vision simulations
    Sauer, Yannick
    Severitt, Bjorn
    Agarwala, Rajat
    Wahl, Siegfried
PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024
  • [47] Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze
    Barz, Michael
    Sonntag, Daniel
    SENSORS, 2021, 21 (12)
  • [48] Visual attention based ROI maps from gaze tracking data
    Nguyen, A
    Chandran, V
    Sridharan, S
    ICIP: 2004 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1- 5, 2004, : 3495 - 3498
  • [49] Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments
    Celiktutan, Oya
    Demiris, Yiannis
    COMPUTER VISION - ECCV 2018 WORKSHOPS, PT VI, 2019, 11134 : 193 - 209
  • [50] Implicit User Calibration for Gaze-Tracking Systems Using Kernel Density Estimation
    Miki, Kohei
    Nagamatsu, Takashi
    Hansen, Dan Witzner
    2016 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS (ETRA 2016), 2016, : 249 - 252