Automatic Modelling of Depressed Speech: Relevant Features and Relevance of Gender

Cited: 0
Authors
Hoenig, Florian [1]
Batliner, Anton [1,2]
Noeth, Elmar [1,3]
Schnieder, Sebastian [4]
Krajewski, Jarek [4]
Affiliations
[1] Friedrich Alexander Univ Erlangen Nurnberg, Pattern Recognit Lab, Erlangen, Germany
[2] Tech Univ Munich, Inst Human Machine Commun, Munich, Germany
[3] King Abdulaziz Univ, Elect & Comp Engn Dept, Jeddah, Saudi Arabia
[4] Univ Wuppertal, Expt Ind Psychol, Wuppertal, Germany
Keywords
depression; acoustic features; brute forcing; interpretation; paralinguistics; classification
DOI: not available
Chinese Library Classification (CLC) number: TP18 [Theory of artificial intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Depression is an affective disorder characterised by psychomotor retardation; in speech, this shows up as reduced pitch (variation and range), loudness, and tempo, and as voice qualities that differ from typical modal speech. A similar reduction can be observed in sleepy speech (relaxation). In this paper, we employ a small set of acoustic features modelling prosody and spectrum that have proven successful in the modelling of sleepy speech, enriched with voice quality features, for the modelling of depressed speech within a regression approach. This knowledge-based approach is complemented by and compared with brute-forcing and automatic feature selection. We further discuss gender differences and the contributions of (groups of) features, both for the modelling of depression and across depression and sleepiness.
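The sketch below is a minimal illustration, not the authors' pipeline: it predicts a continuous depression score from a small, knowledge-based set of prosodic descriptors (pitch level, variation, and range; loudness level and variation) with support vector regression, mirroring the small-feature-set regression approach the abstract describes. The feature set, the score scale (a hypothetical BDI-like range), the synthetic data, and the use of scikit-learn/SciPy are all assumptions made for illustration.

# Minimal sketch (not the authors' method): per-recording prosodic
# descriptors fed into support vector regression on a depression score.
# All data are synthetic; the score scale is a hypothetical BDI-like range.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

def prosodic_features(f0, energy):
    # Pitch level, variation, and range over voiced frames (f0 > 0),
    # plus loudness level and variation: the kind of reduced-prosody
    # descriptors the abstract refers to.
    voiced = f0[f0 > 0]
    return np.array([voiced.mean(), voiced.std(), voiced.max() - voiced.min(),
                     energy.mean(), energy.std()])

# 60 synthetic "recordings" with frame-wise F0 (Hz) and energy contours;
# roughly 30% of frames are zeroed to mimic unvoiced segments.
X = np.vstack([prosodic_features(
                   rng.uniform(80.0, 300.0, 500) * (rng.random(500) > 0.3),
                   rng.uniform(0.1, 1.0, 500))
               for _ in range(60)])
y = rng.uniform(0.0, 45.0, 60)  # hypothetical depression scores

# z-normalised features feeding an RBF support vector regressor,
# evaluated with cross-validated predictions.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
pred = cross_val_predict(model, X, y, cv=5)
print("cross-validated Pearson r: %.2f" % pearsonr(y, pred)[0])

On real data, the frame-wise F0 and energy contours would come from an acoustic front end, and the regression target from clinical questionnaire scores; the brute-forced feature sets and automatic feature selection mentioned in the abstract would replace or extend the hand-picked descriptors above.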
Pages: 1248-1252
Number of pages: 5