No Reference Video Quality Objective Assessment Based on Multilayer BP Neural Network

Cited by: 0
Authors
Yao J.-C. [1,2]
Shen J. [1]
Huang C.-R. [1]
Affiliations
[1] School of Computer Engineering, Nanjing Institute of Technology, Nanjing
[2] School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an
Source
Acta Automatica Sinica
Funding
National Natural Science Foundation of China
Keywords
Delay; Neural networks; Video contents; Video quality evaluation
DOI
10.16383/j.aas.c190539
Abstract
Machine learning has a clear advantage in regressing video quality assessment (VQA) models and can greatly improve model accuracy. To this end, a suitably structured BP neural network is designed, and a no-reference VQA model is constructed by training it on samples from two purpose-built video databases, taking as inputs feature values that describe the distorted video content, coding/decoding distortion, transmission distortion, and visual perception effects. In modeling, 11 features are first used to describe the four main factors that affect video quality: the brightness and chroma of the image and their visual perception, the expected gray-level gradient of the image, the degree of image blur, the local contrast, the motion vectors and their visual perception, the scene-switching feature, the bitrate, the initial delay, the single-interruption delay, the interruption frequency, and the average interruption duration. These feature parameters are extracted from a large number of video samples in the two databases. The BP neural network is then trained on these feature parameters to construct the VQA model. Finally, the proposed model is tested against 14 existing VQA models to evaluate its accuracy, complexity, and generalization performance. The experimental results show that the accuracy of the proposed model is significantly higher than that of all 14 existing models, with an improvement of at least 4.34%, and its generalization performance is also better than that of the 14 models. The complexity of the proposed model is intermediate among the 15 VQA methods. A comprehensive analysis of its accuracy, generalization performance, and complexity shows that it is a sound machine-learning-based VQA model. Copyright ©2019 Acta Automatica Sinica. All rights reserved.
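As a rough illustration of the pipeline the abstract describes, the sketch below trains a small backpropagation (BP) regression network, here scikit-learn's MLPRegressor, on an 11-dimensional feature vector per video and regresses a subjective quality score. The layer sizes, sigmoid activation, training settings, and random placeholder data are all assumptions for illustration; the paper's actual network structure, feature extraction, and databases are not reproduced here.

```python
# Minimal sketch of a BP (backpropagation) regression network for
# no-reference VQA: 11 hand-crafted features in, one quality score out.
# Architecture and hyperparameters below are illustrative assumptions,
# not the authors' published configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: rows are videos, the 11 columns stand in for the
# paper's features (luma/chroma perception, gradient expectation, blur,
# local contrast, motion-vector perception, scene switching, bitrate,
# initial delay, single-interruption delay, interruption frequency,
# average interruption duration).
X = rng.random((200, 11))
y = rng.random(200) * 5.0           # subjective scores, e.g. MOS on [0, 5]

scaler = StandardScaler().fit(X)    # normalize features before training
model = MLPRegressor(hidden_layer_sizes=(16, 8),  # assumed hidden layers
                     activation="logistic",       # classic BP sigmoid units
                     solver="adam",
                     max_iter=2000,
                     random_state=0)
model.fit(scaler.transform(X), y)

# Predict the quality score of a new distorted video from its 11 features.
x_new = rng.random((1, 11))
print(model.predict(scaler.transform(x_new)))
```

In practice the trained regressor would be evaluated against subjective scores with rank and linear correlation (e.g., SROCC/PLCC), which is how accuracy comparisons such as the 4.34% improvement reported above are typically computed.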
Pages: 594-607
Number of pages: 13