Speech-Music Classification Model Based on Improved Neural Network and Beat Spectrum

Cited: 0
Authors
Huang, Chun [1 ]
Wei, HeFu [2 ]
Affiliations
[1] Gen Educ & Int Coll, Chongqing Coll Elect Engn, Chongqing 400031, Peoples R China
[2] Arts Coll Sichuan Univ, Chengdu 401331, Peoples R China
Keywords
Vocal music; classification model; beat spectrum; feature parameter extraction; cosine similarity; convolutional neural network; frequency
DOI
10.14569/IJACSA.2023.0140706
CLC number:
TP301 [Theory, Methods]
Discipline code:
081202
Abstract
A speech-music classification method based on an improved neural network and the beat spectrum is proposed to achieve accurate speech-music classification. The collected vocal music signals are preprocessed by pre-emphasis, endpoint detection, framing, and windowing. After a fast Fourier transform and triangular Mel filtering, a discrete cosine transform yields the Mel-frequency cepstrum coefficient (MFCC) feature parameters of the signal. The cosine similarity between feature parameters gives the signal similarity matrix, from which the beat spectrum of the vocal music is derived. The residual structure is optimized by adding Swish and maxout activation functions, respectively, between convolutional neural network layers to build residual convolution layers and deepen the network. Connectionist temporal classification (CTC) is used as the objective loss function at the softmax layer, yielding a deeply optimized residual convolutional neural network as the speech-music classification model; the beat spectrum of the vocal music is the input to the model. Experiments show that the classification accuracy of the proposed model is higher than 99%; the training loss approaches 0 after 1200 iterations; at a signal-to-noise ratio of 180 dB, the sensitivity and specificity are 99.98% and 99.96%, respectively; and the running time is 0.48 seconds. The model thus offers high classification accuracy, low training loss, and good sensitivity and specificity, and can effectively classify speech and music.
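The MFCC extraction described in the abstract follows the standard pipeline (pre-emphasis, framing, windowing, FFT, triangular Mel filter bank, DCT). A minimal NumPy sketch follows; the frame length, hop size, filter count, and coefficient count are illustrative assumptions, not values taken from the paper:

```python
# Minimal MFCC pipeline sketch (parameter values are assumptions).
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=160, n_mels=26, n_ceps=13, alpha=0.97):
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1], boosting high frequencies
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing with a Hamming window (assumes the signal spans >= one frame)
    n_frames = 1 + (len(emphasized) - n_fft) // hop
    frames = np.stack([emphasized[i*hop : i*hop + n_fft] for i in range(n_frames)])
    frames *= np.hamming(n_fft)
    # Power spectrum via FFT
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular Mel filter bank
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    # Log Mel energies, then DCT to decorrelate -> MFCC feature parameters
    feats = np.log(power @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]
```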
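The similarity-matrix and beat-spectrum step can be sketched the same way. Averaging the diagonals of the frame-wise cosine-similarity matrix is one common construction of the beat spectrum (following Foote); the paper's exact formulation may differ:

```python
# Cosine-similarity matrix of MFCC frames, reduced to a beat spectrum
# by averaging each diagonal (lag). One common construction; assumed here.
import numpy as np

def beat_spectrum(mfcc_feats, max_lag=None):
    # Row-normalize so dot products become cosine similarities
    norms = np.linalg.norm(mfcc_feats, axis=1, keepdims=True)
    unit = mfcc_feats / np.maximum(norms, 1e-10)
    S = unit @ unit.T                  # similarity matrix: S[i, j] = cos(f_i, f_j)
    n = S.shape[0]
    max_lag = max_lag or n
    # Beat spectrum B(l): mean of the l-th superdiagonal of S,
    # i.e. average self-similarity of the signal at lag l
    return np.array([np.mean(np.diag(S, k=l)) for l in range(max_lag)])
```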
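A hedged PyTorch sketch of the residual convolution block with Swish and maxout activations, trained with CTC loss at the softmax layer, is given below. Channel widths, kernel sizes, block count, and the target and blank handling are illustrative assumptions; the abstract does not specify them:

```python
# Sketch of a residual conv block with Swish and maxout activations,
# plus CTC loss on log-softmax outputs. All layer sizes are assumptions.
import torch
import torch.nn as nn

class Maxout1d(nn.Module):
    """Maxout: k parallel conv outputs reduced by an elementwise max."""
    def __init__(self, in_ch, out_ch, k=2):
        super().__init__()
        self.k, self.out_ch = k, out_ch
        self.conv = nn.Conv1d(in_ch, out_ch * k, kernel_size=3, padding=1)

    def forward(self, x):
        y = self.conv(x)                                # (B, out_ch*k, T)
        y = y.view(y.size(0), self.out_ch, self.k, -1)  # (B, out_ch, k, T)
        return y.max(dim=2).values

class ResidualConvBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.swish = nn.SiLU()          # Swish: x * sigmoid(x)
        self.maxout = Maxout1d(ch, ch)

    def forward(self, x):
        # Identity shortcut around the Swish conv and the maxout conv
        return x + self.maxout(self.swish(self.conv(x)))

n_classes = 2 + 1                       # speech, music, plus CTC blank (assumed)
model = nn.Sequential(
    nn.Conv1d(13, 64, kernel_size=3, padding=1),   # 13 MFCCs per frame in
    ResidualConvBlock(64),
    ResidualConvBlock(64),
    nn.Conv1d(64, n_classes, kernel_size=1),
)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(4, 13, 200)                             # (batch, MFCC, frames)
log_probs = model(x).permute(2, 0, 1).log_softmax(-1)   # (T, B, C) for CTCLoss
targets = torch.randint(1, n_classes, (4, 5))           # dummy label sequences
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 5, dtype=torch.long))
```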
Pages: 52-64
Page count: 13