Performance evaluation of real-time speech through a packet network: a random neural networks-based approach

Cited by: 36
Authors
Mohamed, S [1 ]
Rubino, G [1 ]
Varela, M [1 ]
Affiliation
[1] IRISA, INRIA, Bur U319, F-35042 Rennes, France
Keywords
packet audio; random neural networks; G-networks; speech transmission performance; speech quality assessment; network loss models;
DOI
10.1016/j.peva.2003.10.007
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper addresses the problem of quantitatively evaluating the quality of a speech stream transported over the Internet, as perceived by the end-user. We propose an approach capable of performing this task automatically and, if necessary, in real time. Our method uses G-networks (open networks of queues with positive and negative customers) as neural networks (in this setting they are called Random Neural Networks) to learn, in some sense, how humans react to a speech signal that has been distorted by encoding and transmission impairments. The resulting quality estimates can serve control purposes, pricing applications, and more. Our method allows us to study the impact of several source and network parameters on quality simultaneously, which appears to be new (previous work analyzes the effect of only one or two selected parameters). In this paper, we use the technique to study how several basic source and network parameters affect the quality of a non-interactive speech flow, namely the loss rate, loss distribution, codec, forward error correction, and packetization interval, all at the same time. This is important because speech/audio quality is affected by several parameters whose combined effect is neither well identified nor well understood. (C) 2003 Elsevier B.V. All rights reserved.
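To make the approach described in the abstract concrete, the Python sketch below shows how a trained three-layer feed-forward Random Neural Network is evaluated: each neuron's steady-state activation is, per Gelenbe's G-network result, its positive-signal arrival rate divided by its firing rate plus its negative-signal arrival rate, and the output activation is read as the quality estimate. This is a minimal sketch under stated assumptions: the weights, layer sizes, firing rates, input normalization, and the name rnn_forward are all illustrative stand-ins and are not taken from the paper, whose trained parameters are not reproduced here.

```python
import numpy as np

def rnn_forward(x, Wp_ih, Wm_ih, Wp_ho, Wm_ho, r_h, r_o):
    """Evaluate a trained 3-layer feed-forward Random Neural Network.

    Each neuron's steady-state activation follows Gelenbe's G-network
    result: q = (positive-signal arrival rate) / (firing rate +
    negative-signal arrival rate). All weights and rates here are
    placeholders, not the paper's trained model.
    """
    q_in = np.clip(x, 0.0, 1.0)                  # input activations: normalized parameters in [0, 1]
    q_h = (q_in @ Wp_ih) / (r_h + q_in @ Wm_ih)  # hidden-layer balance equations
    q_h = np.minimum(q_h, 1.0)                   # stability requires q < 1; a trained net satisfies this
    return float((q_h @ Wp_ho) / (r_o + q_h @ Wm_ho))  # output activation, read as the quality score

# Illustrative call with random stand-in weights (a real model would use
# weights trained against subjective quality scores, as the paper describes).
rng = np.random.default_rng(0)
n_in, n_hid = 5, 6                               # 5 inputs matching the abstract's parameters; hidden size arbitrary
Wp_ih = rng.uniform(0.0, 1.0, (n_in, n_hid))     # positive (excitatory) input->hidden weights
Wm_ih = rng.uniform(0.0, 1.0, (n_in, n_hid))     # negative (inhibitory) input->hidden weights
Wp_ho = rng.uniform(0.0, 1.0, n_hid)             # hidden->output weights
Wm_ho = rng.uniform(0.0, 1.0, n_hid)
r_h = Wp_ho + Wm_ho                              # firing rate of a neuron = sum of its outgoing weights
r_o = 1.0                                        # arbitrary rate for the single output neuron

# Hypothetical normalized inputs: [loss rate, mean loss burst size,
# codec index, FEC on/off, packetization interval]
x = np.array([0.02, 0.30, 0.50, 1.00, 0.40])
print(f"estimated quality score: {rnn_forward(x, Wp_ih, Wm_ih, Wp_ho, Wm_ho, r_h, r_o):.3f}")
```

In the paper's setting, such a network would be trained so that its output matches subjective quality scores collected from human listeners; evaluating it then takes only a handful of arithmetic operations, which is what makes real-time assessment feasible.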
Pages: 141-161
Page count: 21
Related Papers (50 entries in total)
  • [31] Liu, Yang; Zheng, Zheng; Qin, Fangyun; Zhang, Xiaoyi; Yao, Haonan. A residual convolutional neural network based approach for real-time path planning. KNOWLEDGE-BASED SYSTEMS, 2022, 242
  • [32] Vuong, Tyler; Xia, Yangyang; Stern, Richard M. A modulation-domain loss for neural-network-based real-time speech enhancement. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 6643-6647
  • [33] Khanh Nguyen Duc; Yen-Lien T. Nguyen; Anh-Tuan Le; Tung Le Thanh. Developing neural networks-based prediction model of real-time fuel consumption rate for motorcycles: a case study in Vietnam. ENERGY SOURCES PART A-RECOVERY UTILIZATION AND ENVIRONMENTAL EFFECTS, 2022, 44 (02): 3164-3177
  • [34] Chowdhury, F. N. A new approach to real-time training of dynamic neural networks. INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2003, 17 (06): 509-521
  • [35] Piratla, N. M.; Jayasumana, A. P. A neural network based real-time gaze tracker. JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2002, 25 (03): 179-196
  • [36] Penharbel, Eder Augusto; Goncalves, Ben Hur; Francelin Romero, Roseli Aparecida. A real-time neural network based color classifier. 2008 5TH LATIN AMERICAN ROBOTICS SYMPOSIUM (LARS 2008), 2008: 35-39
  • [37] Pandey, Ashutosh; Wang, DeLiang. TCNN: temporal convolutional neural network for real-time speech enhancement in the time domain. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019: 6875-6879
  • [38] Olmi, R.; Pelosi, G.; Riminesi, C.; Tedesco, M. A neural network approach to real-time dielectric characterization of materials. MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, 2002, 35 (06): 463-465
  • [39] Cook, A. R. Neural network evaluation of real-time texture mapping algorithms. ESS'98 - SIMULATION TECHNOLOGY: SCIENCE AND ART, 1998: 643-647
  • [40] Dominguez-Morales, Juan P.; Liu, Qian; James, Robert; Gutierrez-Galan, Daniel; Jimenez-Fernandez, Angel; Davidson, Simon; Furber, Steve. Deep spiking neural network model for time-variant signals classification: a real-time speech recognition approach. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018