Student Attention Detection Using Multimodal Data Fusion

Cited by: 0
Authors
Mallibhat, Kaushik [1 ]
Affiliations
[1] KLE Technol Univ, Sch Elect & Commun Engn, Hubballi, India
Keywords
Attention; Co-learning; Electroencephalogram; Eye gaze; Machine Learning; Multimodal fusion;
DOI
10.1109/ICALT61570.2024.00092
CLC Number
TP39 [Computer applications];
Subject Classification Codes
081203; 0835;
Abstract
In this work, we propose a framework for integrating information from the behavioral and cognitive spaces to perform attention profiling of a learner engaging with digital content. Attention profiling helps examine and comprehend students' concentration, attention, and cognitive engagement patterns. It enables educators to discern which types of digital content effectively engage students, identify potential distractors, customize learning resources, and enhance students' overall learning experience. Integrated into a Learning Management System (LMS) environment, attention profiling helps students by providing feedback on the content or resources that require more focus. Several studies address student engagement through behavioral cues, including clickstream data, time spent watching videos, number of Git commits, and participation in discussion forums; however, limited research measures student attention using both behavioral cues and cognitive measurements. We address the problem of attention profiling of a learner using data from both the behavioral and cognitive spaces. Integrating data from both spaces necessitates a fusion technique to enhance the performance of attention profiling. We propose to use EEG and eye gaze information from the cognitive and behavioral spaces, respectively. We used the 'Stroop test,' the 'Sustained Attention to Response Task' (SART), and the 'Continuous Performance Task' (CPT) to invoke selective and sustained attention states among learners, and the data collected during these tests served as ground truth. Students then watched three different types of videos while we collected cognitive-space data using Emotiv+, a 14-channel head-mounted EEG device, and behavioral-space data through eye gaze information from a web-camera-based solution.
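The abstract notes that the EEG stream supplies both raw signals and FFT/band-power values. As a minimal sketch of how per-channel band power can be derived from a raw EEG window (the paper does not publish its pipeline; the 128 Hz rate, 2-second window, and band edges below are illustrative assumptions):

```python
import numpy as np

def band_power(eeg_window, fs, bands):
    """Per-channel band power from a raw EEG window via FFT.

    eeg_window: array of shape (n_channels, n_samples)
    fs: sampling rate in Hz (assumed 128 Hz here)
    bands: dict mapping band name -> (low_hz, high_hz)
    """
    n = eeg_window.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)          # frequency bin centers
    psd = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2 / n  # power per bin
    # Sum power over the bins falling inside each band
    return {
        name: psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
        for name, (lo, hi) in bands.items()
    }

# Hypothetical 14-channel, 2-second window at 128 Hz
rng = np.random.default_rng(0)
window = rng.standard_normal((14, 256))
powers = band_power(window, fs=128,
                    bands={"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)})
```

Each band then yields one scalar per channel, giving a compact cognitive-space feature vector (e.g. 14 channels x 3 bands = 42 features per window) suitable for fusion with gaze features.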
The advantage of the Emotiv+ device is its comprehensive sensor coverage across both brain hemispheres, and its real-time data stream includes raw EEG and FFT/band power. To capture the on-screen and off-screen behavior of learners, we used the L2CS-Net gaze estimation architecture built on ResNet-50. We aim to develop a coordinated multimodal data representation framework by employing co-learning methods.
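One common way to build a coordinated multimodal representation via co-learning is to train separate encoders for each modality and align time-paired samples in a shared space with a contrastive (InfoNCE-style) objective. The sketch below illustrates that idea with plain linear projections; the feature dimensions, projection matrices, and temperature are hypothetical, not taken from the paper:

```python
import numpy as np

def normalize(x):
    """Row-normalize embeddings to unit length for cosine similarity."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_alignment_loss(eeg_feats, gaze_feats, W_eeg, W_gaze, temp=0.1):
    """InfoNCE-style loss pulling time-aligned EEG/gaze windows together
    in a shared coordinated space (one flavour of co-learning)."""
    z_e = normalize(eeg_feats @ W_eeg)    # (batch, d_shared)
    z_g = normalize(gaze_feats @ W_gaze)  # (batch, d_shared)
    logits = z_e @ z_g.T / temp           # pairwise cosine similarities
    # Diagonal entries correspond to the true (time-aligned) pairs
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 42))   # e.g. 14 channels x 3 band powers
gaze = rng.standard_normal((8, 4))   # e.g. on/off-screen ratio, fixation stats
loss = contrastive_alignment_loss(eeg, gaze,
                                  rng.standard_normal((42, 16)),
                                  rng.standard_normal((4, 16)))
```

In a full system, the two projection matrices would be learned (typically as small neural encoders) by minimizing this loss over batches of synchronized EEG and gaze windows, after which either modality's embedding can feed the attention classifier.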
Pages: 295-297
Page count: 3