Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review

Cited by: 301
Authors
Zhang, Jianhua [1 ]
Yin, Zhong [2 ]
Chen, Peng [3 ]
Nichele, Stefano [1 ]
Affiliations
[1] Oslo Metropolitan Univ, Dept Comp Sci, Oslo, Norway
[2] Univ Shanghai Sci & Technol, Dept Control Sci & Engn, Shanghai, Peoples R China
[3] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Emotion recognition; Affective computing; Physiological signals; Feature dimensionality reduction; Data fusion; Machine learning; Deep learning; EEG SIGNAL CLASSIFICATION; NEURAL-NETWORK; FEATURE-EXTRACTION; FEATURE-SELECTION; MENTAL WORKLOAD; APPROXIMATE ENTROPY; FACIAL EXPRESSIONS; AUTOMATIC-ANALYSIS; SENTIMENT ANALYSIS; MIXTURE MODEL;
DOI
10.1016/j.inffus.2020.01.011
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, rapid advances in machine learning (ML) and information fusion have made it possible to endow machines/computers with the ability to understand, recognize, and analyze emotions. Emotion recognition has attracted increasingly intense interest from researchers in diverse fields. Human emotions can be recognized from facial expressions, speech, behavior (gesture/posture), or physiological signals. However, the first three methods can be ineffective because humans may involuntarily or deliberately conceal their real emotions (so-called social masking). The use of physiological signals can lead to more objective and reliable emotion recognition. Compared with peripheral neurophysiological signals, electroencephalogram (EEG) signals respond to fluctuations of affective states more sensitively and in real time, and thus can provide useful features of emotional states. Therefore, various EEG-based emotion recognition techniques have been developed in recent years. In this paper, emotion recognition methods based on multi-channel EEG signals as well as multi-modal physiological signals are reviewed. Following the standard pipeline for emotion recognition, we review methods for feature extraction (e.g., wavelet transform and nonlinear dynamics), feature reduction, and ML classifier design (e.g., k-nearest neighbor (KNN), naive Bayes (NB), support vector machine (SVM), and random forest (RF)). Furthermore, the EEG rhythms that are highly correlated with emotions are analyzed, and the correlation between different brain areas and emotions is discussed. Finally, we compare different ML and deep learning algorithms for emotion recognition and suggest several open problems and future research directions in this exciting and fast-growing area of AI.
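The abstract describes a standard recognition pipeline of feature extraction, feature reduction, and classification. As an illustration only, the minimal sketch below wires up one such pipeline with scikit-learn: Welch band-power features from multi-channel EEG, PCA for dimensionality reduction, and an SVM classifier. The data, channel count, sampling rate, band definitions, and parameter values are assumptions for demonstration, not settings taken from the paper.

```python
# Minimal sketch of the standard EEG emotion-recognition pipeline described in the
# abstract: band-power feature extraction -> PCA feature reduction -> SVM classifier.
# All data and parameters below are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(trial):
    """trial: (n_channels, n_samples) EEG segment -> concatenated band-power vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        # log band power per channel, a common EEG feature
        feats.append(np.log(psd[:, mask].mean(axis=-1) + 1e-12))
    return np.concatenate(feats)

# Synthetic stand-in for real EEG: 200 trials, 32 channels, 4 s each,
# with binary labels (e.g., low vs. high valence).
rng = np.random.default_rng(0)
trials = rng.standard_normal((200, 32, FS * 4))
labels = rng.integers(0, 2, size=200)

X = np.array([band_power_features(t) for t in trials])
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

Other feature extractors (wavelet coefficients, nonlinear-dynamics measures) and classifiers surveyed in the paper (KNN, NB, RF) would slot into the same two stages of this pipeline.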
Pages: 103-126
Number of pages: 24
Related papers
50 records in total
  • [1] Multi-modal embeddings using multi-task learning for emotion recognition
    Khare, Aparna
    Parthasarathy, Srinivas
    Sundaram, Shiva
    [J]. INTERSPEECH 2020, 2020, : 384 - 388
  • [2] A Multi-Modal Deep Learning Approach for Emotion Recognition
    Shahzad, H. M.
    Bhatti, Sohail Masood
    Jaffar, Arfan
    Rashid, Muhammad
    [J]. INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 36 (02): : 1561 - 1570
  • [3] Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion
    Younis, Eman M. G.
    Zaki, Someya Mohsen
    Kanjo, Eiman
    Houssein, Essam H.
    [J]. SENSORS, 2022, 22 (15)
  • [4] Facial emotion recognition using multi-modal information
    De Silva, LC
    Miyasato, T
    Nakatsu, R
    [J]. ICICS - PROCEEDINGS OF 1997 INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS AND SIGNAL PROCESSING, VOLS 1-3: THEME: TRENDS IN INFORMATION SYSTEMS ENGINEERING AND WIRELESS MULTIMEDIA COMMUNICATIONS, 1997, : 397 - 401
  • [5] AFLEMP: Attention-based Federated Learning for Emotion recognition using Multi-modal Physiological data
    Gahlan, Neha
    Sethia, Divyashikha
    [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 94
  • [6] A multi-modal deep learning system for Arabic emotion recognition
    Abu Shaqra F.
    Duwairi R.
    Al-Ayyoub M.
    [J]. International Journal of Speech Technology, 2023, 26 (01) : 123 - 139
  • [7] Surface Material Recognition Using Active Multi-modal Extreme Learning Machine
    Liu, Huaping
    Fang, Jing
    Xu, Xinying
    Sun, Fuchun
    [J]. COGNITIVE COMPUTATION, 2018, 10 (06) : 937 - 950
  • [8] Robotic grasping recognition using multi-modal deep extreme learning machine
    Wei, Jie
    Liu, Huaping
    Yan, Gaowei
    Sun, Fuchun
    [J]. MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, 2017, 28 (03) : 817 - 833
  • [9] Multi-modal emotion recognition using recurrence plots and transfer learning on physiological signals
    Elalamy, Rayan
    Fanourakis, Marios
    Chanel, Guillaume
    [J]. 2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021,