Video quality assessment using visual attention computational models

Cited by: 12
Authors
Akamine, Welington Y. L. [1 ]
Farias, Mylene C. Q. [1 ]
Affiliations
[1] Univ Brasilia UnB, Dept Elect Engn, BR-70919970 Brasilia, DF, Brazil
Keywords
video quality metrics; visual attention; quality assessment; artifacts
DOI
10.1117/1.JEI.23.6.061107
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline codes
0808; 0809
Abstract
A recent development in the area of image and video quality consists of trying to incorporate aspects of visual attention into the design of visual quality metrics, mostly under the assumption that visual distortions appearing in less salient areas might be less visible and, therefore, less annoying. This research area is still in its infancy and the results obtained by different groups are not yet conclusive. Among the works that have reported some improvement, most use subjective saliency maps, i.e., saliency maps generated from eye-tracking data obtained experimentally. Other works address the image quality problem rather than the question of how to incorporate visual attention when dealing with video signals. We investigate the benefits of incorporating bottom-up video saliency maps (obtained using Itti's computational model) into video quality metrics. In particular, we compare the performance of four full-reference video quality metrics with their modified versions, which have saliency maps incorporated into the algorithm. Results show that the addition of video saliency maps improves the performance of most quality metrics tested, but the highest gains were obtained for the metrics that only take spatial degradations into consideration. (C) 2014 SPIE and IS&T
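
The abstract describes pooling distortion information with bottom-up saliency maps so that errors in salient regions contribute more to the final quality score. The following Python snippet is a minimal, illustrative sketch of that general idea only, not the paper's metrics or code: it weights a simple per-pixel squared-error map by a precomputed saliency map and averages over frames. The function names and the MSE-style error map are assumptions made for illustration; in the paper the saliency maps come from Itti's computational model and the base metrics are established full-reference metrics.

    import numpy as np

    def saliency_weighted_mse(ref, dist, saliency, eps=1e-8):
        """Pool a per-pixel error map using a saliency map as weights.

        ref, dist : 2-D grayscale frames (float arrays in [0, 1])
        saliency  : 2-D saliency map of the same shape, higher = more salient
        Returns a scalar distortion score in which errors in salient
        regions contribute more than errors in non-salient regions.
        """
        error_map = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
        weights = saliency / (saliency.sum() + eps)  # normalize weights to sum to 1
        return float((weights * error_map).sum())

    def saliency_weighted_video_score(ref_frames, dist_frames, saliency_maps):
        """Average the saliency-weighted per-frame scores over the sequence."""
        scores = [saliency_weighted_mse(r, d, s)
                  for r, d, s in zip(ref_frames, dist_frames, saliency_maps)]
        return float(np.mean(scores))

    # Hypothetical usage with random data standing in for real frames/saliency:
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ref = [rng.random((144, 176)) for _ in range(5)]
        dist = [f + 0.05 * rng.standard_normal(f.shape) for f in ref]
        sal = [rng.random((144, 176)) for _ in range(5)]
        print(saliency_weighted_video_score(ref, dist, sal))

A full-reference metric of the kind evaluated in the paper would replace the squared-error map with the per-pixel quality/distortion map produced by the base metric (e.g., an SSIM map) before the weighted pooling step.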
Pages: 9
Related papers (50 records in total)
  • [41] Effects of temporal jitter on video quality: Assessment using psychophysical and computational modeling methods
    Chang, YC
    Carney, T
    Klein, SA
    Messerschmitt, DG
    Zakhor, A
    HUMAN VISION AND ELECTRONIC IMAGING III, 1998, 3299 : 173 - 179
  • [42] A Brief Overview of Computational Models of Spatial, Temporal, and Feature Visual Attention
    Sperling, George
    INVARIANCES IN HUMAN INFORMATION PROCESSING, 2018, : 143 - 182
  • [43] Home video visual quality assessment with spatiotemporal factors
    Mei, Tao
    Hua, Xian-Sheng
    Zhu, Cai-Zhi
    Zhou, He-Qin
    Li, Shipeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2007, 17 (06) : 699 - 706
  • [44] Integrates Spatiotemporal Visual Stimuli for Video Quality Assessment
    Guo, Wenzhong
    Zhang, Kairui
    Ke, Xiao
    IEEE TRANSACTIONS ON BROADCASTING, 2024, 70 (01) : 223 - 237
  • [45] Improving the Visual Quality of Video Frame Prediction Models Using the Perceptual Straightening Hypothesis
    Kancharla, Parimala
    Channappayya, Sumohana S.
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 2167 - 2171
  • [46] VIDEO QUALITY PREDICTION USING VOXEL-WISE FMRI MODELS OF THE VISUAL CORTEX
    Mahankali, Naga Sailaja
    Channappayya, Sumohana S.
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2125 - 2129
  • [47] Understanding Scenery Quality: A Visual Attention Measure and Its Computational Model
    Loh, Yuen Peng
    Tong, Song
    Liang, Xuefeng
    Kumada, Takatsune
    Chan, Chee Seng
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 289 - 297
  • [48] A computational model of visual attention
    Kohonen, T
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 3238 - 3243
  • [49] A computational theory of visual attention
    Bundesen, C
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY OF LONDON SERIES B-BIOLOGICAL SCIENCES, 1998, 353 (1373) : 1271 - 1281
  • [50] Computational modelling of visual attention
    Itti, L
    Koch, C
    NATURE REVIEWS NEUROSCIENCE, 2001, 2 (03) : 194 - 203