A Preliminary Study of Classifying Spoken Vowels with EEG Signals

Cited: 5
Authors
Li, Mingtao [1 ]
Pun, Sio Hang [2 ]
Chen, Fei [1 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Peoples R China
[2] Univ Macau, State Key Lab Analog & Mixed Signal VLSI, Taipa 999078, Macao, Peoples R China
Keywords
SPEECH SYNTHESIS; COMMUNICATION; CONSONANTS
DOI
10.1109/NER49283.2021.9441414
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The task of classifying vowels from brain activity has been studied in many ideal direct-speech brain-computer interfaces (DS-BCIs). The vowels in those studies usually had clear acoustic differences, mainly in the first and second formants (i.e., F1 and F2). Whereas recent studies found that those speech features were difficult to represent in DS-BCIs based on imagined speech, spoken speech with audible output has the potential to provide insight into the relationship between spoken vowels' classification accuracies and their acoustic differences. This work aimed to classify four spoken Mandarin vowels (i.e., /a/, /u/, /i/ and /ü/, pronounced with different consonants and tones to form monosyllabic stimuli in Mandarin Chinese) using electroencephalogram (EEG) signals. The F1 and F2 of each spoken vowel were extracted; the corresponding spoken EEG signals were analyzed with the Riemannian manifold method and then used to classify the spoken vowels with a linear discriminant classifier. The acoustic analysis showed that, in the F1-F2 plane, the /ü/ ellipse was close to the /u/ and /i/ ellipses. The classification results showed that vowels /a/, /u/ and /i/ were well classified (82.0%, 69.5% and 68.2%, respectively), whereas vowel /ü/ was more easily confused with /u/ and /i/. These results suggest that spoken vowels with similar formant structures are difficult to classify from their spoken EEG signals.
Pages: 13 - 16
Page count: 4
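
The abstract describes a pipeline in which spoken-EEG epochs are represented by spatial covariance matrices on the Riemannian manifold, mapped to the tangent space, and classified with a linear discriminant classifier. The sketch below illustrates such a pipeline with pyriemann and scikit-learn; the epoch shape, covariance estimator, and cross-validation scheme are illustrative assumptions, not details taken from the paper, and the random arrays stand in for real preprocessed EEG epochs and vowel labels.

    # Minimal sketch (not the authors' released code) of a Riemannian
    # tangent-space + LDA classifier for spoken-EEG vowel epochs.
    # Assumed input: X of shape (n_trials, n_channels, n_samples) and
    # integer labels y for the four vowels; filtering, epoching and
    # channel selection are outside the scope of this sketch.
    import numpy as np
    from pyriemann.estimation import Covariances
    from pyriemann.tangentspace import TangentSpace
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 32, 256))  # placeholder EEG epochs: trials x channels x samples
    y = rng.integers(0, 4, size=120)         # placeholder labels for the four vowel classes

    clf = make_pipeline(
        Covariances(estimator="oas"),        # shrinkage-regularised spatial covariance per trial
        TangentSpace(metric="riemann"),      # vectorise covariances in the tangent space at the Riemannian mean
        LinearDiscriminantAnalysis(),        # linear discriminant classifier, as named in the abstract
    )
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.3f}")

Tangent-space mapping turns each symmetric positive-definite covariance matrix into an ordinary feature vector, which is what makes a simple linear classifier such as LDA applicable to the manifold-valued EEG features.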