Construction and improvement of English vocabulary learning model integrating spiking neural network and convolutional long short-term memory algorithm

Cited by: 2
Authors
Wang, Yunxia [1 ]
Affiliation
[1] Nanyang Med Coll, Nanyang, Henan, Peoples R China
Source
PLOS ONE | 2024 / Vol. 19 / Issue 03
Keywords
CNN-LSTM; CLASSIFICATION; CONVLSTM;
DOI
10.1371/journal.pone.0299425
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
To help non-native English speakers quickly master English vocabulary and improve their reading, writing, listening, speaking, and communication skills, this study designs, constructs, and improves an English vocabulary learning model that integrates a Spiking Neural Network (SNN) with the Convolutional Long Short-Term Memory (ConvLSTM) algorithm. The fusion of the SNN and ConvLSTM algorithms exploits the strengths of the SNN in processing temporal information and of ConvLSTM in sequence data modeling, yielding a fusion model that performs well in English vocabulary learning. By adding information transfer and interaction modules, feature learning and temporal information processing are optimized to improve the model's vocabulary learning ability across different text contents. The training set is an open dataset drawn from the WordNet and Oxford English Corpus corpora. The model is implemented as a computer program and can be applied in an English learning application, an online vocabulary learning platform, or language education software. The experiment uses the open dataset to generate test sets with text volumes ranging from 100 to 4000. The performance indicators of the proposed fusion model are compared with those of five traditional models and applied to the latest vocabulary exercises. From the learners' perspective, ten indicators are considered: model accuracy, loss, polysemy processing accuracy, training time, syntactic structure capturing accuracy, vocabulary coverage, F1-score, context understanding accuracy, word sense disambiguation accuracy, and word order relation processing accuracy. The experimental results reveal that the fusion model performs better across different text sizes. For text volumes of 100-400, the accuracy is 0.75-0.77, the loss is less than 0.45, the F1-score is greater than 0.75, the training time is within 300 s, and the remaining indicators exceed 65%. For text volumes of 500-1000, the accuracy is 0.81-0.83, the loss is no more than 0.40, the F1-score is no less than 0.78, the training time is within 400 s, and the remaining indicators exceed 70%. For text volumes of 1500-3000, the accuracy is 0.82-0.84, the loss is less than 0.28, the F1-score is no less than 0.78, the training time is within 600 s, and the remaining indicators exceed 70%. The fusion model adapts to various question types in practical application: after evaluation by professional teachers, the average scores on multiple-choice, fill-in-the-blank, spelling, matching, exercise, and synonym questions are 85.72, 89.45, 80.31, 92.15, 87.62, and 78.94, respectively, much higher than those of the traditional models. This shows that as text volume increases, the performance of the fusion model gradually improves, with higher accuracy and lower loss. In practical application, the fusion model proposed in this study performs well on English learning tasks and offers greater benefits for people unfamiliar with English vocabulary structure, grammar, and question types. This study aims to provide efficient and accurate natural language processing tools that help non-native English speakers understand and apply the language more easily and improve English vocabulary learning and comprehension.
Pages: 14
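
The abstract above describes an architecture in which a spiking network handles temporal dynamics, a ConvLSTM models the token sequence, and added information transfer and interaction modules connect the two. The paper's record gives no implementation details, so the following is only a minimal PyTorch sketch of one way such a fusion could be wired up; the class names (LIFEncoder, ConvLSTMCell, SNNConvLSTMFusion), the layer sizes, and the linear "interaction" projection are illustrative assumptions, not the authors' design.

# Illustrative sketch only: a hypothetical SNN + ConvLSTM fusion for token sequences.
import torch
import torch.nn as nn

class LIFEncoder(nn.Module):
    """Leaky integrate-and-fire layer that turns embedded tokens into spike trains."""
    def __init__(self, in_dim, out_dim, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.decay, self.threshold = decay, threshold

    def forward(self, x):                      # x: (batch, time, in_dim)
        mem = torch.zeros(x.size(0), self.fc.out_features, device=x.device)
        spikes = []
        for t in range(x.size(1)):
            mem = self.decay * mem + self.fc(x[:, t])   # membrane potential update
            spk = (mem >= self.threshold).float()       # fire when threshold is crossed
            mem = mem - spk * self.threshold            # soft reset after a spike
            spikes.append(spk)
        # NOTE: the hard threshold blocks gradients; training the spiking part
        # would need a surrogate gradient, which is omitted here for brevity.
        return torch.stack(spikes, dim=1)               # (batch, time, out_dim)

class ConvLSTMCell(nn.Module):
    """1-D ConvLSTM cell: all four gates are computed with one convolution."""
    def __init__(self, channels, hidden, kernel=3):
        super().__init__()
        self.hidden = hidden
        self.gates = nn.Conv1d(channels + hidden, 4 * hidden, kernel, padding=kernel // 2)

    def forward(self, x, state):               # x: (batch, channels, length)
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class SNNConvLSTMFusion(nn.Module):
    """Spike encoder -> ConvLSTM -> classifier; a linear projection stands in for
    the information transfer/interaction module mentioned in the abstract."""
    def __init__(self, vocab_size, embed_dim=64, spike_dim=64, hidden=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.snn = LIFEncoder(embed_dim, spike_dim)
        self.interact = nn.Linear(spike_dim, spike_dim)   # assumed interaction module
        self.convlstm = ConvLSTMCell(1, hidden)
        self.out = nn.Linear(hidden * spike_dim, n_classes)

    def forward(self, tokens):                 # tokens: (batch, time) word indices
        spikes = self.snn(self.embed(tokens))             # (batch, time, spike_dim)
        spikes = torch.relu(self.interact(spikes))
        b, t, d = spikes.shape
        h = torch.zeros(b, self.convlstm.hidden, d, device=tokens.device)
        c = torch.zeros_like(h)
        for step in range(t):
            h, c = self.convlstm(spikes[:, step].unsqueeze(1), (h, c))
        return self.out(h.flatten(1))

model = SNNConvLSTMFusion(vocab_size=5000)
logits = model(torch.randint(0, 5000, (8, 20)))           # 8 sequences of 20 token ids
print(logits.shape)                                       # torch.Size([8, 10])

The sketch only illustrates the forward data flow; in practice a surrogate-gradient method (or an SNN library such as snnTorch or SpikingJelly) would be needed to train through the spike nonlinearity end to end.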