Towards Neural Decoding of Imagined Speech based on Spoken Speech

Cited by: 1
Authors
Lee, Seo-Hyun [1 ]
Lee, Young-Eun [1 ]
Kim, Soowon [2 ]
Ko, Byung-Kwan [2 ]
Lee, Seong-Whan [2 ]
Affiliations
[1] Korea Univ, Dept Brain & Cognit Engn, Seoul, South Korea
[2] Korea Univ, Dept Artificial Intelligence, Seoul, South Korea
Keywords
brain-computer interface; imagined speech; speech recognition; spoken speech; visual imagery
DOI
10.1109/BCI57258.2023.10078707
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Decoding imagined speech from human brain signals is a challenging and important problem that may enable communication via brain signals. While imagined speech is a natural paradigm for silent communication via brain signals, it is difficult to collect enough stable data to train a decoding model. Spoken speech data, in contrast, are relatively easy to obtain, which suggests the value of leveraging spoken speech brain signals to decode imagined speech. In this paper, we performed a preliminary analysis to determine whether spoken speech electroencephalography (EEG) data can be used to decode imagined speech, by directly applying a model pre-trained on spoken speech brain signals to imagined speech. While a classifier trained and validated solely on imagined speech data achieved 30.5 +/- 4.9%, the spoken speech based classifier transferred to imagined speech data achieved an average accuracy of 26.8 +/- 2.0%, which was not statistically significantly different from the imagined speech based classifier (p = 0.0983, chi-square = 4.64). For a more comprehensive analysis, we compared this result with a visual imagery dataset, which is naturally less related to spoken speech than imagined speech is. Visual imagery showed a solely trained performance of 31.8 +/- 4.1% and a transferred performance of 26.3 +/- 2.4%, a statistically significant difference (p = 0.022, chi-square = 7.64). Our results imply the potential of applying spoken speech to decode imagined speech, as well as underlying features common to the two paradigms.
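The evaluation outlined in the abstract (train a classifier on spoken speech EEG, apply it unchanged to imagined speech trials, and compare against a classifier trained on imagined speech alone) can be illustrated with a minimal sketch. The sketch below assumes pre-extracted feature matrices, a generic scikit-learn LDA classifier, and a chi-square test on correct/incorrect trial counts; none of these specifics come from the paper, whose actual model, preprocessing, and statistical procedure are not given in this record.

```python
# Minimal sketch of the cross-paradigm transfer evaluation, under the
# assumptions stated above. All variable names and data are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical (trials x features) matrices and multi-class word labels
# for the spoken speech and imagined speech paradigms.
X_spoken, y_spoken = rng.normal(size=(400, 64)), rng.integers(0, 5, 400)
X_imagined, y_imagined = rng.normal(size=(400, 64)), rng.integers(0, 5, 400)

# 1) Baseline: train and validate on imagined speech data only.
baseline_acc = cross_val_score(
    LinearDiscriminantAnalysis(), X_imagined, y_imagined, cv=5
).mean()

# 2) Transfer: fit on spoken speech trials, test directly on imagined speech.
transfer_acc = (
    LinearDiscriminantAnalysis()
    .fit(X_spoken, y_spoken)
    .score(X_imagined, y_imagined)
)

# 3) Compare the two classifiers via a chi-square test on how many imagined
#    speech trials each classifies correctly versus incorrectly.
n = len(y_imagined)
counts = [
    [round(baseline_acc * n), n - round(baseline_acc * n)],
    [round(transfer_acc * n), n - round(transfer_acc * n)],
]
chi2, p, _, _ = chi2_contingency(counts)
print(f"imagined-only: {baseline_acc:.3f}, transferred: {transfer_acc:.3f}, "
      f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

A deeper classifier or different features could be substituted for LDA without changing the structure of the comparison; the point of the protocol is that step 2 never sees imagined speech data during training.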
Pages: 4