Speech-Music Classification Model Based on Improved Neural Network and Beat Spectrum

Cited by: 0
Authors:
Huang, Chun [1 ]
Wei, HeFu [2 ]
Affiliations:
[1] Gen Educ & Int Coll, Chongqing Coll Elect Engn, Chongqing 400031, Peoples R China
[2] Arts Coll Sichuan Univ, Chengdu 401331, Peoples R China
Keywords:
Vocal music; classification model; beat spectrum; feature parameter extraction; cosine similarity; convolutional neural network; FREQUENCY;
DOI:
10.14569/IJACSA.2023.0140706
CLC number:
TP301 [Theory and Methods]
Subject classification code:
081202
Abstract:
A speech-music classification method based on an improved neural network and the beat spectrum is proposed to achieve accurate classification of speech and music. The collected vocal music signals are preprocessed through pre-emphasis, endpoint detection, framing, and windowing. After a fast Fourier transform and triangular (mel) filtering, the log filter-bank energies are passed through a discrete cosine transform to obtain the Mel-frequency cepstral coefficient (MFCC) feature parameters of the signal. The similarity between feature parameters is computed with cosine similarity to form a signal similarity matrix, from which the beat spectrum of the vocal music is derived. The residual structure is optimized by adding Swish and maxout activation functions, respectively, between convolutional neural network layers to build residual convolution layers and deepen the network. Connectionist temporal classification (CTC) is used as the objective loss function and applied at the softmax layer to build a deeply optimized residual convolutional neural network as the speech-music classification model. The beat spectrum of the vocal music serves as the model input to realize the classification. Experiments show that the classification accuracy of the proposed model exceeds 99%; at 1200 iterations the training loss approaches 0; at a signal-to-noise ratio of 180 dB the sensitivity and specificity are 99.98% and 99.96%, respectively; and the running time is 0.48 seconds. The model thus achieves high classification accuracy, low training loss, and good sensitivity and specificity, effectively realizing speech-music classification.
Pages: 52-64 (13 pages)
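The front end described in the abstract (pre-emphasis, framing, windowing, FFT, triangular mel filter bank, log, DCT for MFCCs, then a cosine-similarity matrix whose diagonal averages give the beat spectrum) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the sample rate, frame length, hop, filter count, and coefficient count below are assumed defaults.

```python
import numpy as np

def mfcc_features(signal, sr=16000, frame_len=400, hop=160,
                  n_fft=512, n_mels=26, n_ceps=13):
    """MFCC pipeline: pre-emphasis, framing, windowing, FFT,
    triangular mel filter bank, log, DCT."""
    # Pre-emphasis boosts high-frequency content.
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(frame_len)
    # Power spectrum via the FFT.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filter bank (filter edges equally spaced on the mel scale).
    mel_pts = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # DCT-II of the log filter-bank energies yields the cepstral coefficients.
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1) / (2 * n_mels)))
    return log_energy @ dct.T  # shape: (n_frames, n_ceps)

def beat_spectrum(feats):
    """Cosine-similarity matrix of frame features, then the mean of each
    diagonal (lag) — the beat spectrum peaks at periodic repetitions."""
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-10)
    sim = norm @ norm.T                      # cosine similarity matrix
    n = sim.shape[0]
    return np.array([np.trace(sim, offset=lag) / (n - lag) for lag in range(n)])
```

The beat spectrum at lag 0 is 1 by construction (each frame is maximally similar to itself); periodic content shows up as secondary peaks at the beat period.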