Student Attention Detection Using Multimodal Data Fusion

Cited by: 0
Authors
Mallibhat, Kaushik [1 ]
Affiliations
[1] KLE Technol Univ, Sch Elect & Commun Engn, Hubballi, India
Keywords
Attention; Co-learning; Electroencephalogram; Eye gaze; Machine Learning; Multimodal fusion;
DOI
10.1109/ICALT61570.2024.00092
CLC number
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
In this work, we propose a framework for integrating information from the behavioral and cognitive spaces to perform attention profiling of a learner engaging with digital content. Attention profiling helps examine and comprehend students' concentration, attention, and cognitive engagement patterns. It enables educators to discern which types of digital content effectively engage students, identify potential distractors, customize learning resources, and enhance students' overall learning experience. Integrated into a Learning Management System (LMS) environment, attention profiling helps students by providing feedback on the content or resources that require more focus. Several studies address student engagement through behavioral cues, including clickstream data, time spent watching videos, number of Git commits, and participation in discussion forums; however, limited research measures student attention using both behavioral cues and cognitive measurements. We address the problem of attention profiling of a learner using data from the behavioral and cognitive spaces. Integrating the data from both spaces necessitates a fusion technique to enhance the performance of attention profiling. We propose to use EEG and eye gaze information from the cognitive and behavioral spaces, respectively. We used the 'Stroop test,' 'Sustained Attention to Response Task' (SART), and 'Continuous Performance Task' (CPT) to invoke selective and sustained attention states among learners. The data collected during these tests served as ground truth. Students then watched three different types of videos while we collected cognitive-space data using Emotiv+, a 14-channel head-mounted EEG device, and behavioral-space data through eye gaze information using a web-camera-based solution.
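The abstract does not specify how the two feature streams are combined; a minimal sketch of feature-level (early) fusion, with assumed dimensions (14 EEG channels x 5 frequency bands of band power, plus 2 gaze-angle features) and random stand-in data, could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window features (dimensions are assumptions, not from the paper):
# 14 EEG channels x 5 frequency bands -> 70 band-power values (cognitive space),
# plus 2 gaze-angle estimates (behavioral space).
n_windows = 8
eeg_band_power = rng.random((n_windows, 14 * 5))
gaze_features = rng.random((n_windows, 2))

def zscore(x):
    """Standardize each feature so neither modality dominates the fused vector."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Early fusion: standardize per modality, then concatenate along the feature axis,
# yielding one fused feature vector per time window for a downstream classifier.
fused = np.concatenate([zscore(eeg_band_power), zscore(gaze_features)], axis=1)
```

Standardizing before concatenation matters because raw EEG band power and gaze angles live on very different scales.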
The advantage of the Emotiv+ device is its comprehensive sensor coverage across both brain hemispheres, and its real-time data stream includes raw EEG and FFT/band power. To capture the on-screen and off-screen behavior of the learners, we used the L2CS-Net gaze estimation architecture built on ResNet-50. We aim to develop a coordinated multimodal data representation framework by employing co-learning methods.
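A coordinated representation, unlike early fusion, keeps one embedding per modality and aligns them in a shared space. The sketch below illustrates the idea with assumed dimensions and random placeholder projection matrices standing in for whatever the co-learning step would actually fit; it is not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for per-window features (dimensions are assumptions).
eeg = rng.random((16, 70))   # cognitive-space features (band power)
gaze = rng.random((16, 2))   # behavioral-space features (gaze angles)

# One projection per modality into a shared 8-dim coordinate space.
# Random matrices here; a co-learning method would learn these weights.
W_eeg = rng.normal(size=(70, 8))
W_gaze = rng.normal(size=(2, 8))

def embed(x, w):
    """Project a modality into the shared space and L2-normalize each row."""
    z = x @ w
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

z_eeg, z_gaze = embed(eeg, W_eeg), embed(gaze, W_gaze)

# Per-window cosine similarity between time-aligned EEG and gaze embeddings:
# the quantity a co-learning objective would push toward 1 for matched windows.
alignment = np.sum(z_eeg * z_gaze, axis=1)
```

Because both embeddings are unit-normalized, each alignment score is a cosine similarity in [-1, 1].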
Pages: 295-297
Page count: 3