Performance evaluation of real-time speech through a packet network: a random neural networks-based approach

Cited by: 36
Authors
Mohamed, S. [1]
Rubino, G. [1]
Varela, M. [1]
Affiliations
[1] IRISA, INRIA, Bur U319, F-35042 Rennes, France
Keywords
packet audio; random neural networks; G-networks; speech transmission performance; speech quality assessment; network loss models;
DOI
10.1016/j.peva.2003.10.007
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline Classification Code
0812;
Abstract
This paper addresses the problem of quantitatively evaluating the quality of a speech stream transported over the Internet, as perceived by the end-user. We propose an approach able to perform this task automatically and, if necessary, in real time. Our method is based on using G-networks (open networks of queues with positive and negative customers) as neural networks (in this setting they are called Random Neural Networks) to learn, in some sense, how humans react to a speech signal that has been distorted by encoding and transmission impairments. This can be used for control purposes, pricing applications, etc. Our method allows us to study the impact of several source and network parameters on quality simultaneously, which appears to be new (previous work analyzes the effect of only one or two selected parameters). In this paper, we use our technique to study the combined impact of several basic source and network parameters on the quality of a non-interactive speech flow, namely loss rate, loss distribution, codec, forward error correction, and packetization interval. This is important because speech/audio quality is affected by several parameters whose combined effect is neither well identified nor understood. (C) 2003 Elsevier B.V. All rights reserved.
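For intuition, here is a minimal sketch (not the authors' trained model) of how a Gelenbe-style Random Neural Network can be evaluated to map a vector of source/network parameters to a single quality score: each neuron's activity solves the G-network fixed point q_i = lambda+_i / (r_i + lambda-_i), and the output neuron's activity is read off as a MOS-like estimate. The function name, weight matrices, input normalization, and 0-5 output scaling below are all illustrative assumptions, not values from the paper.

```python
# Sketch only: evaluating a Gelenbe-style Random Neural Network that maps
# normalized source/network parameters to a quality score. Weights are
# untrained placeholders, not the model fitted in the paper.
import numpy as np

def rnn_quality(x, W_plus, W_minus, Lambda_scale=1.0, iters=200):
    """Fixed-point evaluation of a recurrent Random Neural Network (sketch).

    x       : normalized inputs in [0, 1] (e.g. loss rate, mean loss burst size,
              codec index, FEC flag, packetization interval), applied as external
              positive-signal arrival rates to the first len(x) neurons.
    W_plus  : excitatory rate matrix, W_plus[i, j] = rate of positive signals i -> j.
    W_minus : inhibitory rate matrix, W_minus[i, j] = rate of negative signals i -> j.
    Returns the activity of the last neuron, scaled to a MOS-like 0-5 range.
    """
    n = W_plus.shape[0]
    Lambda = np.zeros(n)
    Lambda[:len(x)] = Lambda_scale * np.asarray(x, dtype=float)
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1) + 1e-9   # total firing rate of each neuron
    q = np.zeros(n)
    for _ in range(iters):                                 # iterate q_i = lam+_i / (r_i + lam-_i)
        lam_plus = Lambda + q @ W_plus
        lam_minus = q @ W_minus
        q = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)  # clip keeps the iterate in [0, 1]
    return 5.0 * q[-1]

# Toy usage with random, untrained weights: 5 input, 4 hidden, 1 output neuron.
rng = np.random.default_rng(0)
n = 10
W_plus = rng.uniform(0.0, 0.3, size=(n, n))
W_minus = rng.uniform(0.0, 0.3, size=(n, n))
params = [0.05, 0.4, 0.5, 1.0, 0.2]   # loss rate, burstiness, codec, FEC, packetization interval
print(f"estimated quality (MOS-like): {rnn_quality(params, W_plus, W_minus):.2f}")
```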
Pages: 141-161
Number of pages: 21
Related Papers
50 records in total
  • [41] Real-Time and Continuous Hand Gesture Spotting: an Approach Based on Artificial Neural Networks
    Neto, Pedro
    Pereira, Dario
    Norberto Pires, J.
    Paulo Moreira, A.
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013, : 178 - 183
  • [42] Towards Real-time Speech Emotion Recognition using Deep Neural Networks
    Fayek, H. M.
    Lech, M.
    Cavedon, L.
    2015 9TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), 2015,
  • [43] A comparison of neural networks for real-time emotion recognition from speech signals
    Department of Software Engineering, Izmir University of Economics, Sakarya Cad No.156, Balcova, Izmir 35330, Turkey
WSEAS Trans. Signal Process., 2009, (3): 116 - 125
  • [44] Efficient Gated Convolutional Recurrent Neural Networks for Real-Time Speech Enhancement
    Fazal-E-Wahab
    Ye, Zhongfu
    Saleem, Nasir
    Ali, Hamza
    Ali, Imad
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2024, 9 (01):
  • [45] A convolutional neural network based approach towards real-time hard hat detection
    Xie, Zaipeng
    Liu, Hanxiang
    Li, Zewen
    He, Yuechao
    PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL CONFERENCE ON PROGRESS IN INFORMATICS AND COMPUTING (PIC), 2018, : 430 - 434
  • [46] Real-time monitoring of sports performance based on ensemble learning algorithm and neural network
    Zhou, Yucheng
    Lu, Wen
    Zhang, YingQiu
    SOFT COMPUTING, 2023,
  • [47] Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis
    Kumar, Sandeep
    ETRI JOURNAL, 2021, 43 (01) : 82 - 94
  • [48] A Real-Time Convolutional Neural Network Based Speech Enhancement for Hearing Impaired Listeners Using Smartphone
    Bhat, Gautam S.
    Shankar, Nikhil
    Reddy, Chandan K. A.
    Panahi, Issa M. S.
    IEEE ACCESS, 2019, 7 : 78421 - 78433
  • [49] A Novel Proposed Approach For Real-Time Scheduling Based On Neural Networks Approach With Minimization of Power Consumption
    Rhaiem, Ghofrane
    Gharsellaoui, Hamza
    Ben Ahmed, Samir
    2016 WORLD SYMPOSIUM ON COMPUTER APPLICATIONS & RESEARCH (WSCAR), 2016, : 98 - 103
  • [50] Real-time Multi-channel Speech Enhancement Based on Neural Network Masking with Attention Model
    Xue, Cheng
    Huang, Weilong
    Chen, Weiguang
    Feng, Jinwei
    INTERSPEECH 2021, 2021, : 1862 - 1866