Student Attention Detection Using Multimodal Data Fusion

Cited: 0
Authors
Mallibhat, Kaushik [1 ]
Institution
[1] KLE Technol Univ, Sch Elect & Commun Engn, Hubballi, India
Keywords
Attention; Co-learning; Electroencephalogram; Eye gaze; Machine Learning; Multimodal fusion;
DOI
10.1109/ICALT61570.2024.00092
CLC Classification Code
TP39 [Computer Applications];
Discipline Code
081203 ; 0835 ;
Abstract
In this work, we propose a framework for integrating information from the behavioral and cognitive spaces to perform attention profiling of a learner engaging with digital content. Attention profiling helps examine and comprehend students' concentration, attention, and cognitive engagement patterns. It enables educators to discern which types of digital content effectively engage students, identify potential distractors, and customize learning resources to enhance students' overall learning experience. Integrated into a Learning Management System (LMS) environment, attention profiling also helps students by providing feedback on the content or resources that require more focus. Several studies address student engagement through behavioral cues, including clickstream data, time spent watching videos, number of Git commits, and participation in discussion forums; however, limited research measures student attention using both behavioral cues and cognitive measurements. We address the problem of attention profiling of a learner using data from both the behavioral and cognitive spaces. Integrating the data from both spaces necessitates a fusion technique to enhance the performance of attention profiling. We propose to use EEG and eye-gaze information from the cognitive and behavioral spaces, respectively. We used the 'Stroop Test,' 'Sustained Attention to Response Task' (SART), and 'Continuous Performance Task' (CPT) to invoke selective and sustained attention states among learners; the data collected during these tests served as ground truth. Students then watched three different types of videos while we collected data from the cognitive space using Emotiv+, a 14-channel head-mounted EEG device, and data from the behavioral space through eye-gaze information using a webcam-based solution.
The advantage of the Emotiv+ device is its comprehensive sensor coverage across both brain hemispheres, and its real-time data stream includes raw EEG and FFT/band-power values. To capture the on-screen and off-screen behavior of the learners, we used the L2CS-Net gaze estimation architecture built on ResNet-50. We aim to develop a coordinated multimodal data representation framework by employing co-learning methods.
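The abstract does not specify which co-learning method produces the coordinated representation, but a classical instance of a coordinated multimodal representation is canonical correlation analysis (CCA), which projects two modalities into a shared space where their correlation is maximized. The sketch below is illustrative only: the EEG band-power and gaze feature arrays are synthetic stand-ins, and CCA is one possible choice, not the paper's actual method.

```python
import numpy as np

def _inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca(X, Y, k=1, reg=1e-6):
    """Canonical correlation analysis: learn projections mapping two
    modalities into a shared k-dim space of maximal correlation."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # regularized covariances
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten each modality, then SVD of the whitened cross-covariance.
    M = _inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(M)
    Wx = _inv_sqrt(Cxx) @ U[:, :k]   # EEG-side projection
    Wy = _inv_sqrt(Cyy) @ Vt[:k].T   # gaze-side projection
    return Wx, Wy, s[:k]             # s holds the canonical correlations

# Synthetic demo: a shared "attention" latent z drives both a 4-dim
# EEG band-power vector and a 3-dim gaze-feature vector (plus noise).
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 4)) + 0.3 * rng.normal(size=(500, 4))  # "EEG"
Y = z @ rng.normal(size=(1, 3)) + 0.3 * rng.normal(size=(500, 3))  # "gaze"
Wx, Wy, corr = cca(X, Y, k=1)
```

Projecting each modality with its learned matrix (`(X - X.mean(0)) @ Wx`) yields per-sample coordinates in the shared space, which a downstream classifier could use as the fused attention feature.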
Pages: 295 - 297
Page count: 3
Related Papers
50 records in total
  • [1] Flood Detection using Semantic Segmentation and Multimodal Data Fusion
    Basnyat, Bipendra
    Roy, Nirmalya
    Gangopadhyay, Aryya
    2021 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS (PERCOM WORKSHOPS), 2021, : 135 - 140
  • [2] Multimodal Fusion of EEG and Eye Data for Attention Classification using Machine Learning
    Roy, Indrani Paul
    Neog, Debanga Raj
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 953 - 954
  • [3] A Multimodal Data Fusion and Embedding Attention Mechanism-Based Method for Eggplant Disease Detection
    Wang, Xinyue
    Yan, Fengyi
    Li, Bo
    Yu, Boda
    Zhou, Xingyu
    Tang, Xuechun
    Jia, Tongyue
    Lv, Chunli
    PLANTS-BASEL, 2025, 14 (05)
  • [4] Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection
    Nguyen Manh Duc Tuan
    Pham Quang Nhat Minh
    2021 RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES (RIVF 2021), 2021, : 43 - 48
  • [5] Multimodal False News Detection Based on Fusion Attention Mechanism
    Liu, Hualing
    Chen, Shanghui
    Qiao, Liang
    Liu, Yaxin
    COMPUTER ENGINEERING AND APPLICATIONS, 2023, 59 (09) : 95 - 103
  • [6] Multimodal Data Fusion for Depression Detection Approach
    Nykoniuk, Mariia
    Basystiuk, Oleh
    Shakhovska, Nataliya
    Melnykova, Nataliia
    COMPUTATION, 2025, 13 (01)
  • [7] Multimodal-Attention Fusion for the Detection of Questionable Content in Videos
    Morales, Arnold
    Baharlouei, Elaheh
    Solorio, Thamar
    Escalante, Hugo Jair
    PATTERN RECOGNITION, MCPR 2024, 2024, 14755 : 188 - 199
  • [8] Multimodal Fusion Induced Attention Network for Industrial VOCs Detection
    Kang, Yu
    Shi, Kehao
    Tan, Jifang
    Cao, Yang
    Zhao, Lijun
    Xu, Zhenyi
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (12) : 6385 - 6398
  • [9] Attention Bottlenecks for Multimodal Fusion
    Nagrani, Arsha
    Yang, Shan
    Arnab, Anurag
    Jansen, Aren
    Schmid, Cordelia
    Sun, Chen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Multimodal Fusion with Co-Attention Networks for Fake News Detection
    Wu, Yang
    Zhan, Pengwei
    Zhang, Yunjian
    Wang, Liming
    Xu, Zhen
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2560 - 2569