Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning

Cited by: 35
Authors
Bahreini, Kiavash [1 ]
Nadolski, Rob [1 ]
Westera, Wim [1 ]
Institution
[1] Open Univ Netherlands, Fac Psychol & Educ Sci, Res Ctr Learning Teaching & Technol, Welten Inst, Valkenburgerweg 177, NL-6419 AT Heerlen, Netherlands
Keywords
FACIAL EXPRESSION; IMPACT; AUDIO;
DOI
10.1080/10447318.2016.1159799
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
This article describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims to deploy a real-time multimodal emotion recognition method that provides more adequate feedback to learners during online communication skills training. Such training requires timely feedback that reflects the intended emotions learners show and increases their awareness of their own behavior. Adequate feedback in turn presupposes a reliable and valid software interpretation of the performed facial and vocal emotions. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behavior was recorded on video, and two raters independently scored the shown emotions, which were contrasted with the software recognition outcomes. The hybrid multimodal fusion method of our software achieves accuracies between 96.1% and 98.6% over the predicted emotions for the best-performing WEKA classifiers. The software fulfils its requirements of real-time data interpretation and reliable results.
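The abstract describes a hybrid fusion of the webcam (face) and microphone (voice) channels. As an illustrative sketch only (the authors' actual system uses trained WEKA classifiers; the emotion labels, probabilities, and weights below are hypothetical), decision-level fusion can be expressed as a weighted average of each modality's class-probability distribution, followed by an argmax:

```python
# Illustrative decision-level (late) fusion of two emotion classifiers.
# Labels, weights, and probability values are hypothetical examples,
# not taken from the paper.

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def fuse(face_probs, voice_probs, w_face=0.6, w_voice=0.4):
    """Return (predicted label, fused distribution) from two
    per-modality probability distributions over EMOTIONS."""
    assert abs(w_face + w_voice - 1.0) < 1e-9, "weights must sum to 1"
    fused = [w_face * f + w_voice * v
             for f, v in zip(face_probs, voice_probs)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return EMOTIONS[best], fused

# Example: face strongly suggests "happy", voice is ambiguous.
face = [0.70, 0.05, 0.05, 0.10, 0.10]
voice = [0.30, 0.20, 0.20, 0.10, 0.20]
label, fused = fuse(face, voice)  # label == "happy"
```

A truly hybrid scheme, as the term is commonly used, would combine such decision-level fusion with feature-level fusion (concatenating face and voice feature vectors before classification); the sketch shows only the decision-level half.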
Pages: 415-430
Page count: 16
Related Papers
50 records in total
  • [1] Towards real-time speech emotion recognition for affective e-learning
    Bahreini K.
    Nadolski R.
    Westera W.
    [J]. Education and Information Technologies, 2016, 21 (5) : 1367 - 1386
  • [2] Real-time music emotion recognition based on multimodal fusion
    Hao, Xingye
    Li, Honghe
    Wen, Yonggang
    [J]. Alexandria Engineering Journal, 2025, 116 : 586 - 600
  • [3] Towards multimodal emotion recognition in e-learning environments
    Bahreini, Kiavash
    Nadolski, Rob
    Westera, Wim
    [J]. INTERACTIVE LEARNING ENVIRONMENTS, 2016, 24 (03) : 590 - 605
  • [4] Multimodal Attentive Learning for Real-time Explainable Emotion Recognition in Conversations
    Arumugam, Balaji
    Das Bhattacharjee, Sreyasee
    Yuan, Junsong
    [J]. 2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 1210 - 1214
  • [5] Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts
    Nandi, Arijit
    Xhafa, Fatos
    Subirats, Laia
    Fort, Santi
    [J]. SENSORS, 2021, 21 (05) : 1 - 26
  • [6] Character agents in e-learning interface using multimodal real-time interaction
    Wang, Hua
    Yang, He
    Chignell, Mark
    Ishizuka, Mitsuru
    [J]. HUMAN-COMPUTER INTERACTION, PT 3, PROCEEDINGS, 2007, 4552 : 225 - +
  • [7] Deep CNN with late fusion for real time multimodal emotion recognition
    Dixit, Chhavi
    Satapathy, Shashank Mouli
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 240
  • [8] Real-time learning behavior mining for e-learning
    Kuo, YH
    Chen, JN
    Jeng, YL
    Huang, YM
    [J]. 2005 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE, PROCEEDINGS, 2005, : 653 - 656
  • [9] Emotion Recognition in E-learning Systems
    El Hammoumi, Oussama
    Benmarrakchi, Fatimaezzahra
    Ouherrou, Nihal
    El Kafi, Jamal
    El Hore, Ali
    [J]. PROCEEDINGS OF 2018 6TH INTERNATIONAL CONFERENCE ON MULTIMEDIA COMPUTING AND SYSTEMS (ICMCS), 2018, : 52 - 57
  • [10] A Real-time Multimodal Intelligent Tutoring Emotion Recognition System (MITERS)
    Khediri, Nouha
    Ben Ammar, Mohamed
    Kherallah, Monji
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (19) : 57759 - 57783