Deep Learning Based Video Spatio-Temporal Modeling for Emotion Recognition

Times cited: 5
Authors
Fonnegra, Ruben D. [1 ]
Diaz, Gloria M. [1 ]
Affiliations
[1] Inst Tecnol Metropolitano, Medellin, Colombia
Keywords
Deep learning; Facial emotion recognition; Spatio-temporal modeling; FACIAL EXPRESSION RECOGNITION; DESIGN; SYSTEM;
DOI
10.1007/978-3-319-91238-7_32
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Affective computing is a growing research area that aims to infer a user's emotional state from conscious and unconscious actions and to use that information to adapt the machine interaction. This paper investigates the discriminative ability of convolutional and recurrent neural networks to model spatio-temporal features from video sequences of the face region. In the proposed deep learning architecture, dense convolutional layers analyze spatial changes in frames over short time periods, while dense recurrent layers model the frames as temporal sequences that evolve over time. These layers are then connected to a multilayer perceptron (MLP) that performs the classification task, which consists in distinguishing among six emotion categories. Performance was evaluated in two settings: gender-independent and gender-dependent classification. Experimental results show that the proposed approach achieves an accuracy of 81.84% in the gender-independent experiment, outperforming previous works on the same experimental data. In the gender-dependent experiment, accuracy was 80.79% for male and 82.75% for female subjects.
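To make the pipeline described in the abstract concrete, the following is a minimal sketch of a frame-wise CNN feeding a recurrent layer and an MLP classification head, written in PyTorch. The framework choice, layer sizes, sequence length, and frame resolution are illustrative assumptions and are not taken from the paper, whose exact "dense convolutional" and "dense recurrent" configuration is not specified in this record.

import torch
import torch.nn as nn

# Minimal sketch of a CNN + RNN + MLP video emotion classifier.
# All hyperparameters (channels, hidden size, input resolution) are illustrative.
class VideoEmotionNet(nn.Module):
    def __init__(self, num_classes=6, hidden=128):
        super().__init__()
        # Spatial feature extractor applied to every frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B*T, 64)
        )
        # Temporal model over the sequence of per-frame features.
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # MLP classifier over the final hidden state.
        self.mlp = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):                  # x: (B, T, 3, H, W)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)      # h_n: (1, B, hidden)
        return self.mlp(h_n[-1])           # logits over the 6 emotion classes

# Usage example: a batch of 4 clips, each 16 face-region frames of 64x64 RGB.
logits = VideoEmotionNet()(torch.randn(4, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 6])

The sketch only mirrors the spatial-then-temporal decomposition the abstract describes; the same structure could be realized with GRUs, 3D convolutions, or a different backbone without changing the overall idea.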
Pages: 397-408
Page count: 12