Feature-level fusion approaches based on multimodal EEG data for depression recognition

Cited: 163
Authors
Cai, Hanshu [1]
Qu, Zhidiao [1]
Li, Zhe [1]
Zhang, Yi [1]
Hu, Xiping [1,2]
Hu, Bin [1,3,4,5]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Gansu Prov Key Lab Wearable Comp, Lanzhou, Peoples R China
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[3] Lanzhou Univ, Joint Res Ctr Cognit Neurosensor Technol, Lanzhou, Peoples R China
[4] Chinese Acad Sci, Inst Semicond, Lanzhou, Peoples R China
[5] Lanzhou Univ, Minist Educ, Open Source Software & Real Time Syst, Lanzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Depression recognition; EEG; Multimodal; Audio stimulus; Fusion; BODY SENSOR NETWORKS; FEATURE-SELECTION; CLASSIFICATION; ASYMMETRY; THETA; FREQUENCY; DIAGNOSIS; ENTROPY; ALPHA; STATE;
DOI
10.1016/j.inffus.2020.01.008
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
This study aimed to construct a novel multimodal model that fuses electroencephalogram (EEG) data recorded under neutral, negative and positive audio stimulation to discriminate between depressed patients and normal controls. The EEG data of the different modalities were combined using a feature-level fusion technique to build a depression recognition model. EEG signals were recorded from 86 depressed patients and 92 normal controls while they received the different audio stimuli. Linear and nonlinear features were then extracted and selected from the EEG signals of each modality. A linear combination technique was used to fuse the EEG features of the different modalities into a global feature vector and to identify several powerful features. Furthermore, genetic algorithms were used to weight the features and improve the overall performance of the recognition framework. The classification accuracies of the k-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM) classifiers were compared, and the results were encouraging. The highest classification accuracy, 86.98%, was obtained by the KNN classifier on the fusion of the positive and negative audio stimuli, demonstrating that the fused modalities can achieve higher depression recognition accuracy than the individual modality schemes. This study may provide an additional tool for identifying patients with depression.
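The sketch below is a minimal, illustrative Python example (NumPy and scikit-learn) of the kind of pipeline the abstract describes: per-modality EEG feature vectors are concatenated into a global feature vector (feature-level fusion), a small genetic algorithm searches for feature weights, and a KNN classifier is scored by cross-validation. The data are synthetic, and all dimensions, GA settings, and the weighting scheme are assumptions made for illustration, not the authors' actual features or configuration.

```python
# Illustrative sketch only: synthetic data, assumed feature sizes and GA settings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic per-modality feature matrices (e.g., features extracted under
# positive and negative audio stimuli); 178 subjects, 30 features each (assumed).
n_subjects, n_feat = 178, 30
X_pos = rng.normal(size=(n_subjects, n_feat))
X_neg = rng.normal(size=(n_subjects, n_feat))
y = rng.integers(0, 2, size=n_subjects)          # 1 = depressed, 0 = control

# Feature-level fusion: concatenate the per-modality feature vectors
# into one global feature vector per subject.
X_fused = np.hstack([X_pos, X_neg])

def fitness(weights, X, y):
    """Cross-validated KNN accuracy on the weighted feature matrix."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X * weights, y, cv=5).mean()

# Very small genetic algorithm for feature weighting (illustrative only).
pop_size, n_gen, mut_rate = 20, 15, 0.1
pop = rng.uniform(0.0, 1.0, size=(pop_size, X_fused.shape[1]))
for _ in range(n_gen):
    scores = np.array([fitness(w, X_fused, y) for w in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]                  # selection
    cut = X_fused.shape[1] // 2
    children = np.vstack([np.hstack([parents[i, :cut],     # one-point crossover
                                     parents[(i + 1) % len(parents), cut:]])
                          for i in range(len(parents))])
    mask = rng.random(children.shape) < mut_rate           # mutation
    children[mask] = rng.uniform(0.0, 1.0, size=mask.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w, X_fused, y) for w in pop])]
print("best cross-validated accuracy:", fitness(best, X_fused, y))
```

In the paper's setting, the synthetic matrices would be replaced by the linear and nonlinear EEG features selected for each stimulus modality, and the same weighted feature matrix could be passed to DT or SVM classifiers for comparison.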
Pages: 127-138
Number of pages: 12