Context-Sensitive Prediction of Facial Expressivity using Multimodal Hierarchical Bayesian Neural Networks

Cited by: 9
Authors
Joshi, Ajjen [1 ]
Ghosh, Soumya [2 ]
Gunnery, Sarah [3 ]
Tickle-Degnen, Linda [3 ]
Sclaroff, Stan [1 ]
Betke, Margrit [1 ]
Affiliations
[1] Boston Univ, Dept Comp Sci, 111 Cummington St, Boston, MA 02215 USA
[2] IBM Res, Cambridge, MA USA
[3] Tufts Univ, Dept Occupat Therapy, Medford, MA 02155 USA
Source
PROCEEDINGS 2018 13TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2018), 2018
Funding
U.S. National Science Foundation
Keywords
PARKINSON'S DISEASE
DOI
10.1109/FG.2018.00048
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Objective automated affect analysis systems can be applied to quantify the progression of symptoms in neurodegenerative diseases such as Parkinson's Disease (PD). PD hampers the ability of patients to emote by decreasing the mobility of their facial musculature, a phenomenon known as "facial masking." In this work, we focus on building a system that can predict an accurate score of active facial expressivity in people suffering from Parkinson's disease using features extracted from both video and audio. An ideal automated system should be able to mimic the ability of human experts to take into account contextual information while making these predictions. For example, patients exhibit different emotions with varying intensities when speaking about positive and negative experiences. We utilize a hierarchical Bayesian neural network framework to enable the learning of model parameters that subtly adapt to pre-defined notions of context, such as the gender of the patient or the valence of the expressed sentiment. We evaluate our formulation on a dataset of 772 20-second video clips of Parkinson's disease patients and demonstrate that training a context-specific hierarchical Bayesian framework yields an improvement in model performance in both multiclass classification and regression settings compared to baseline models trained on all data pooled together.
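The core idea in the abstract — context-specific model parameters that are softly tied to shared global parameters through a prior, rather than trained independently per context — can be illustrated with a toy MAP-estimation sketch. The sketch below is an assumption-laden simplification: a linear model stands in for the paper's neural network, the two contexts and all data are synthetic, and `tau2` is a hypothetical pooling-strength hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: two contexts (e.g., positive vs.
# negative valence), each with its own linear map from features to an
# expressivity score. All data here is synthetic.
n, d = 200, 5
true_w = {c: rng.normal(size=d) for c in (0, 1)}
X = {c: rng.normal(size=(n, d)) for c in (0, 1)}
y = {c: X[c] @ true_w[c] + 0.1 * rng.normal(size=n) for c in (0, 1)}

# Hierarchical MAP estimation: context weights w_c are tied to a shared
# global vector w_g via the Gaussian prior w_c ~ N(w_g, tau2 * I), so
# contexts share statistical strength instead of being fit in isolation.
tau2 = 10.0  # larger tau2 => weaker pooling across contexts
w_g = np.zeros(d)
w_c = {c: np.zeros(d) for c in (0, 1)}
lr = 0.01
for _ in range(2000):
    grad_g = np.zeros(d)
    for c in (0, 1):
        resid = X[c] @ w_c[c] - y[c]
        # data-fit gradient plus a pull toward the shared global weights
        w_c[c] -= lr * (X[c].T @ resid / n + (w_c[c] - w_g) / tau2)
        grad_g += (w_g - w_c[c]) / tau2
    w_g -= lr * grad_g

mse = {c: float(np.mean((X[c] @ w_c[c] - y[c]) ** 2)) for c in (0, 1)}
```

After fitting, each context retains its own weights (mirroring the paper's finding that context-specific parameters beat a single pooled model), while the shared prior would regularize contexts with little data; shrinking `tau2` pools more aggressively toward the global model.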
Pages: 278-285 (8 pages)