No-Reference Video Quality Assessment Using the Temporal Statistics of Global and Local Image Features

Cited by: 2
Authors
Varga, Domonkos [1 ]
Affiliation
[1] Ronin Inst, Montclair, NJ 07043 USA
Keywords
no-reference video quality assessment; quality-aware features; multi-feature fusion; PREDICTION; MODEL;
DOI
10.3390/s22249696
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302 ; 081704 ;
Abstract
During acquisition, storage, and transmission, the quality of digital videos degrades significantly. Low-quality videos lead to the failure of many computer vision applications, such as object tracking or detection, intelligent surveillance, etc. Over the years, many different features have been developed to resolve the problem of no-reference video quality assessment (NR-VQA). In this paper, we propose a novel NR-VQA algorithm that integrates the fusion of temporal statistics of local and global image features with an ensemble learning framework in a single architecture. Namely, the temporal statistics of global features reflect all parts of the video frames, while the temporal statistics of local features reflect the details. Specifically, we apply a broad spectrum of statistics of local and global features to characterize the variety of possible video distortions. In order to study the effectiveness of the method introduced in this paper, we conducted experiments on two large benchmark databases, i.e., KoNViD-1k and LIVE VQC, which contain authentic distortions, and we compared it to 14 other well-known NR-VQA algorithms. The experimental results show that the proposed method is able to achieve greatly improved results on the considered benchmark datasets. Namely, the proposed method exhibits significant progress in performance over other recent NR-VQA approaches.
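The pipeline the abstract outlines — per-frame global and local feature extraction, temporal pooling of those statistics, and an ensemble regressor mapping the pooled vector to a quality score — can be sketched as follows. This is a minimal illustration with placeholder features (frame mean/std, patchwise variance) and random labels, not the paper's actual feature set or learner:

```python
import numpy as np

def global_features(frame):
    # Global statistics summarizing the whole frame
    return np.array([frame.mean(), frame.std(), np.percentile(frame, 90)])

def local_features(frame, patch=8):
    # Local statistics: variance within non-overlapping patches captures detail
    h, w = frame.shape
    h, w = h - h % patch, w - w % patch
    blocks = frame[:h, :w].reshape(h // patch, patch, w // patch, patch)
    variances = blocks.var(axis=(1, 3))
    return np.array([variances.mean(), variances.std()])

def video_feature_vector(frames):
    # Per-frame features, then temporal pooling (mean and std over time)
    per_frame = np.array(
        [np.concatenate([global_features(f), local_features(f)]) for f in frames]
    )
    return np.concatenate([per_frame.mean(axis=0), per_frame.std(axis=0)])

# Toy data: 20 synthetic 10-frame grayscale videos with placeholder MOS labels
rng = np.random.default_rng(0)
videos = [rng.random((10, 64, 64)) for _ in range(20)]
X = np.array([video_feature_vector(v) for v in videos])
y = rng.random(20)

# Toy ensemble: average least-squares fits over random feature subsets
preds = np.zeros_like(y)
n_models = 5
for _ in range(n_models):
    idx = rng.choice(X.shape[1], size=X.shape[1] // 2, replace=False)
    w, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    preds += (X[:, idx] @ w) / n_models
```

In a real NR-VQA setting, `y` would be mean opinion scores from a database such as KoNViD-1k or LIVE VQC, and the ensemble would be a stronger learner (e.g., gradient-boosted trees) trained on a much richer feature set.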
Pages: 21
Related Papers (50 items)
  • [41] NATURAL DCT STATISTICS APPROACH TO NO-REFERENCE IMAGE QUALITY ASSESSMENT
    Saad, Michele A.
    Bovik, Alan C.
    Charrier, Christophe
    2010 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, 2010, : 313 - 316
  • [42] No-Reference Video Quality Assessment Metric Using Spatiotemporal Features Through LSTM
    Kwong, Ngai-Wing
    Tsang, Sik-Ho
    Chan, Yui-Lam
    Lun, Daniel Pak-Kong
    Lee, Tsz-Kwan
    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2021, 2021, 11766
  • [43] Graph-based No-Reference Video Quality Assessment Using Spatial Features
    Department of Artificial Intelligence
    (authors not specified)
    Int. Conf. Signal Process. Commun., SPCOM, 2024,
  • [44] No-Reference Video Quality Assessment based on temporal information extraction
    Zhang, Zhaolin
    Shi, Haoshan
    2013 2ND INTERNATIONAL SYMPOSIUM ON INSTRUMENTATION AND MEASUREMENT, SENSOR NETWORK AND AUTOMATION (IMSNA), 2013, : 925 - 927
  • [45] NO-REFERENCE STEREOSCOPIC VIDEO QUALITY ASSESSMENT ALGORITHM USING JOINT MOTION AND DEPTH STATISTICS
    Appina, Balasubramanyam
    Jalli, Akshith
    Battula, Shanmukha Srinivas
    Channappayya, Sumohana S.
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 2800 - 2804
  • [46] A Fast and Efficient No-Reference Video Quality Assessment Algorithm Using Video Action Recognition Features
    Suresh, N.
    Mylavarapu, Pavan Manesh
    Mahankali, Naga Sailaja
    Channappayya, Sumohana S.
    2022 NATIONAL CONFERENCE ON COMMUNICATIONS (NCC), 2022, : 402 - 406
  • [47] Editorial Expression of Concern: No-Reference Video Quality Assessment Based on the Temporal Pooling of Deep Features
    Domonkos Varga
    Neural Processing Letters, 2021, 53 : 2379 - 2380
  • [48] Editorial Expression of Concern: No-Reference Video Quality Assessment Based on the Temporal Pooling of Deep Features
    Varga, Domonkos
    NEURAL PROCESSING LETTERS, 2021, 53 (03) : 2379 - 2380
  • [49] A No-Reference Image Quality Assessment
    Kemalkar, Aniket K.
    Bairagi, Vinayak K.
    2013 IEEE INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN COMPUTING, COMMUNICATION AND NANOTECHNOLOGY (ICE-CCN'13), 2013, : 462 - 465
  • [50] No-reference image quality assessment using statistical wavelet-packet features
    Hadizadeh, Hadi
    Bajic, Ivan V.
    PATTERN RECOGNITION LETTERS, 2016, 80 : 144 - 149