Construction and improvement of English vocabulary learning model integrating spiking neural network and convolutional long short-term memory algorithm

Cited by: 2
Authors
Wang, Yunxia [1 ]
Affiliation
[1] Nanyang Med Coll, Nanyang, Henan, Peoples R China
Source
PLOS ONE | 2024 / Vol. 19 / No. 03
Keywords
CNN-LSTM; CLASSIFICATION; CONVLSTM;
DOI
10.1371/journal.pone.0299425
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
To help non-native English speakers master English vocabulary quickly and improve their reading, writing, listening, speaking, and communication skills, this study designs, constructs, and improves an English vocabulary learning model that integrates the Spiking Neural Network (SNN) and Convolutional Long Short-Term Memory (ConvLSTM) algorithms. Fusing SNN and ConvLSTM exploits the strengths of SNNs in processing temporal information and of ConvLSTM in modeling sequence data, yielding a fusion model that performs well in English vocabulary learning. By adding information transfer and interaction modules, feature learning and temporal information processing are optimized to improve the model's vocabulary learning ability across different text contents. The training set is an open dataset drawn from the WordNet and Oxford English Corpus corpora. The model is implemented as a computer program and can be applied to an English learning application, an online vocabulary learning platform, or language education software. The experiments use the open dataset to generate test sets with text volumes ranging from 100 to 4000. The performance of the proposed fusion model is compared with that of five traditional models and evaluated on recent vocabulary exercises. From the learner's perspective, ten indicators are considered: model accuracy, loss, polysemy processing accuracy, training time, syntactic structure capturing accuracy, vocabulary coverage, F1-score, context understanding accuracy, word sense disambiguation accuracy, and word order relation processing accuracy. The experimental results reveal that the fusion model performs better across different text sizes.
For text volumes of 100-400, the accuracy is 0.75-0.77, the loss is below 0.45, the F1-score is above 0.75, the training time is within 300 s, and the remaining indicators exceed 65%. For text volumes of 500-1000, the accuracy is 0.81-0.83, the loss is at most 0.40, the F1-score is at least 0.78, the training time is within 400 s, and the remaining indicators exceed 70%. For text volumes of 1500-3000, the accuracy is 0.82-0.84, the loss is below 0.28, the F1-score is at least 0.78, the training time is within 600 s, and the remaining indicators exceed 70%. The fusion model also adapts to various question types in practical application: as rated by professional teachers, the average scores for choice, fill-in-the-blank, spelling, matching, exercise, and synonym questions are 85.72, 89.45, 80.31, 92.15, 87.62, and 78.94, respectively, much higher than those of the traditional models. Thus, as text volume increases, the performance of the fusion model gradually improves, with higher accuracy and lower loss. In practical application, the proposed fusion model works well for English learning tasks and offers the greatest benefit to learners unfamiliar with English vocabulary structure, grammar, and question types. This study aims to provide efficient and accurate natural language processing tools that help non-native English speakers understand and apply the language more easily, improving English vocabulary learning and comprehension.
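The abstract describes fusing an SNN's temporal spike processing with ConvLSTM sequence modeling through information transfer and interaction modules, but the record contains no code. The sketch below is a minimal illustration under stated assumptions only: a leaky integrate-and-fire (LIF) spike encoder, a ConvLSTM cell simplified to a dense LSTM cell (the degenerate case of a 1x1 kernel at a single spatial position), and an assumed multiplicative "interaction" that lets spikes gate the raw features before the recurrent pass. All layer sizes, function names, and the fusion scheme are invented for illustration and are not the authors' published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lif_spikes(x, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire encoding: (T, F) real inputs -> (T, F) binary spikes."""
    v = np.zeros(x.shape[1])          # membrane potential per feature
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        v = decay * v + x[t]          # leak, then integrate the input
        fired = v >= threshold
        out[t] = fired.astype(float)
        v[fired] = 0.0                # reset potential after a spike
    return out

class LSTMCell:
    """A ConvLSTM cell with a 1x1 kernel at one position reduces to this dense cell."""
    def __init__(self, feat, hidden):
        self.W = rng.normal(0, 0.1, (4 * hidden, feat + hidden))
        self.b = np.zeros(4 * hidden)
        self.hidden = hidden

    def step(self, x, h, c):
        g = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden
        i, f, o = sigmoid(g[:H]), sigmoid(g[H:2*H]), sigmoid(g[2*H:3*H])
        c_new = f * c + i * np.tanh(g[3*H:])
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def fused_forward(seq, cell):
    """Assumed interaction module: spikes emphasize the time steps that fired."""
    spikes = lif_spikes(seq)
    fused = seq * (0.5 + 0.5 * spikes)      # gate features by spike activity
    h = np.zeros(cell.hidden)
    c = np.zeros(cell.hidden)
    for t in range(fused.shape[0]):
        h, c = cell.step(fused[t], h, c)
    return h

T, F, H = 12, 8, 16                    # sequence length, feature size, hidden size (assumed)
seq = rng.normal(size=(T, F))
cell = LSTMCell(F, H)
h = fused_forward(seq, cell)
print(h.shape)                         # (16,)
```

The design choice mirrored here is the one the abstract emphasizes: the SNN branch supplies discrete temporal events, and an interaction module feeds them into the recurrent branch rather than training the two networks independently.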
Pages: 14
Related Papers
50 records
  • [1] Aviation visibility forecasting by integrating Convolutional Neural Network and long short-term memory network
    Chen, Chuen-Jyh
    Huang, Chieh-Ni
    Yang, Shih-Ming
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (03) : 5007 - 5020
  • [2] A Deep Neural Network Model for Short-Term Load Forecast Based on Long Short-Term Memory Network and Convolutional Neural Network
    Tian, Chujie
    Ma, Jian
    Zhang, Chunhong
    Zhan, Panpan
    ENERGIES, 2018, 11 (12)
  • [3] A Convolutional Long Short-Term Memory Neural Network Based Prediction Model
    Tian, Y. H.
    Wu, Q.
    Zhang, Y.
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2020, 15 (05) : 1 - 12
  • [4] Deep Learning with Convolutional Neural Network and Long Short-Term Memory for Phishing Detection
    Adebowale, M. A.
    Lwin, K. T.
    Hossain, M. A.
    2019 13TH INTERNATIONAL CONFERENCE ON SOFTWARE, KNOWLEDGE, INFORMATION MANAGEMENT AND APPLICATIONS (SKIMA), 2019
  • [5] Production Forecasting with the Interwell Interference by Integrating Graph Convolutional and Long Short-Term Memory Neural Network
    Du, Enda
    Liu, Yuetian
    Cheng, Ziyan
    Xue, Liang
    Ma, Jing
    He, Xuan
    SPE RESERVOIR EVALUATION & ENGINEERING, 2022, 25 (02) : 197 - 213
  • [6] Fault diagnosis algorithm of electric vehicle based on convolutional neural network and long short-term memory neural network
    Li, Xiaojie
    Zhang, Yang
    Wang, Haolin
    Zhao, Heming
    Cui, Xueliang
    Yue, Xikai
    Ma, Zilin
    INTERNATIONAL JOURNAL OF GREEN ENERGY, 2024, 21 (16) : 3638 - 3653
  • [7] Short-Term Passenger Flow Prediction Using a Bus Network Graph Convolutional Long Short-Term Memory Neural Network Model
    Baghbani, Asiye
    Bouguila, Nizar
    Patterson, Zachary
    TRANSPORTATION RESEARCH RECORD, 2023, 2677 (02) : 1331 - 1340
  • [8] A SPIKING NETWORK MODEL OF SHORT-TERM ACTIVE MEMORY
    Zipser, D.
    Kehoe, B.
    Littlewort, G.
    Fuster, J.
    JOURNAL OF NEUROSCIENCE, 1993, 13 (08): 3406 - 3420
  • [9] Convolutional long short-term memory neural network for groundwater change prediction
    Patra, Sumriti Ranjan
    Chu, Hone-Jay
    FRONTIERS IN WATER, 2024, 6
  • [10] A hybrid convolutional neural network with long short-term memory for statistical arbitrage
    Eggebrecht, P.
    Luetkebohmert, E.
    QUANTITATIVE FINANCE, 2023, 23 (04) : 595 - 613