A Systematic Review of Sensing and Differentiating Dichotomous Emotional States Using Audio-Visual Stimuli

Cited by: 13
Authors
Veeranki, Yedukondala Rao [1 ]
Kumar, Himanshu [1 ]
Ganapathy, Nagarajan [1 ,2 ,3 ]
Natarajan, Balasubramaniam [4 ]
Swaminathan, Ramakrishnan [1 ]
Affiliations
[1] Indian Inst Technol Madras, Dept Appl Mech, Biomed Engn Grp, Chennai 600036, Tamil Nadu, India
[2] TU Braunschweig, Peter L Reichertz Inst Med Informat, D-38106 Braunschweig, Germany
[3] Hannover Med Sch, D-38106 Braunschweig, Germany
[4] Kansas State Univ, Mike Wiegers Dept Elect & Comp Engn, Manhattan, KS 66502 USA
Keywords
Instruments; Physiology; Emotion recognition; Heart rate variability; Protocols; Electroencephalography; Electrocardiography; Audio-visual stimuli; classification; emotion recognition; happy; instrumentation; sad; ELECTRODERMAL ACTIVITY; PSYCHOPHYSIOLOGICAL RESPONSES; HAPPINESS; SADNESS; CLASSIFICATION; RECOGNITION; PHOTOPLETHYSMOGRAPHY; DECOMPOSITION; VALENCE; AROUSAL;
DOI
10.1109/ACCESS.2021.3110773
Chinese Library Classification (CLC) number
TP [Automation technology; computer technology];
Discipline classification code
0812;
Abstract
Recognition of dichotomous emotional states such as happy and sad plays an important role in many aspects of human life. The existing literature records diverse attempts to extract physiological and non-physiological traits that capture these emotional states. Selecting the right instrumental approach for measuring these traits is critical for emotion recognition. Moreover, various stimuli have been used to induce emotions. There is therefore a current need for a comprehensive overview of instrumental approaches and their outcomes for the new generation of researchers. In this direction, this study surveys the instrumental approaches used to discriminate happy and sad emotional states elicited by audio-visual stimuli. A comprehensive literature review was performed using the PubMed, Scopus, and ACM Digital Library repositories. The reviewed articles are classified with respect to i) stimulation modality, ii) acquisition protocol, iii) instrumentation approaches, iv) feature extraction, and v) classification methods. In total, 39 research articles on instrumental approaches for differentiating dichotomous emotional states using audio-visual stimuli were published between January 2011 and April 2021. The majority of the papers used physiological traits, namely electrocardiogram, electrodermal activity, heart rate variability, photoplethysmogram, and electroencephalogram based instrumental approaches, to recognize the emotional states. The results show that only a few articles have focused on audio-visual stimuli for eliciting happy and sad emotional states. This review is expected to seed research in the standardization of protocols, the enhancement of the diagnostic relevance of these instruments, and the extraction of more reliable biomarkers.
Pages: 124434-124451
Page count: 18
Related articles
50 records in total
  • [41] Kanaya, Shoko; Yokosawa, Kazuhiko. Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli. Psychonomic Bulletin & Review, 2011, 18(1): 123-128
  • [42] Bennett, Fleming. Audio-visual services in colleges and universities in the United States: report of a survey by the ACRL Committee on Audio-Visual Work. College & Research Libraries, 1955, 16(1): 11-19
  • [43] Sha, XW; Iachello, G; Dow, S; Serita, Y; Jilien, TS; Fistre, J. Continuous sensing of gesture for control of audio-visual media. Seventh IEEE International Symposium on Wearable Computers, Proceedings, 2003: 236-237
  • [45] Chu, Eric; Roy, Deb. Audio-visual sentiment analysis for learning emotional arcs in movies. 2017 17th IEEE International Conference on Data Mining (ICDM), 2017: 829-834
  • [46] Su, Rongfeng; Wang, Lan; Liu, Xunying. Multimodal learning using 3D audio-visual data for audio-visual speech recognition. 2017 International Conference on Asian Language Processing (IALP), 2017: 40-43
  • [47] Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A. Cortical integration of audio-visual speech and non-speech stimuli. Brain and Cognition, 2010, 74(2): 97-106
  • [48] Zhao, Liang; Wang, Jie; Wang, Lu; An, Xingwei. The response mechanism of the brain under different intervals of audio-visual stimuli. 2024 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA 2024), 2024
  • [49] Shiomi, Masahiro; Hagita, Norihiro. MetaHug: audio-visual stimuli change stress buffer effect of hug. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2018), 2018: 336-341
  • [50] Jia, Weixian; Shi, Li. The effect of auditory stimuli in audio-visual two-source integration. 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC 2019), Vol 1, 2019: 145-148