No-reference stereoscopic image quality assessment using 3D visual saliency maps fused with three-channel convolutional neural network

Cited: 0
Authors
Chaofeng Li
Lixia Yun
Hui Chen
Shoukun Xu
Affiliations
[1] Shanghai Maritime University,Institute of Logistics Science and Engineering
[2] Jiangnan University,School of Internet of Things Engineering
[3] Changzhou University,School of Information Science and Engineering
Keywords
No-reference stereoscopic image quality assessment; 3D visual saliency maps; Convolutional neural network; Depth saliency map
DOI: not available
Abstract
In this paper, we present a depth-perceived 3D visual saliency map and propose a no-reference stereoscopic image quality assessment (NR SIQA) algorithm that uses 3D visual saliency maps and a convolutional neural network (CNN). First, the 2D salient region of the stereoscopic image is generated and the depth saliency map is computed; the two are then combined into a 3D visual saliency map by linear weighting, which makes better use of the depth and disparity information of the 3D image. Finally, the 3D visual saliency map, together with the distorted stereoscopic pair, is fed into a three-channel CNN to learn human subjective perception. We call the proposed depth-perception and CNN-based SIQA method DPCNN. DPCNN is evaluated on the popular LIVE 3D Phase I and LIVE 3D Phase II databases and proves competitive with state-of-the-art NR SIQA algorithms.
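The fusion step in the abstract combines a 2D saliency map and a depth saliency map by linear weighting. A minimal sketch of that step is below; the weight `w`, the min-max normalization, and the function name `fuse_saliency` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_saliency(sal_2d: np.ndarray, sal_depth: np.ndarray, w: float = 0.7) -> np.ndarray:
    """Linearly combine a 2D saliency map with a depth saliency map.

    Both maps are normalized to [0, 1] first, so w controls the relative
    contribution of 2D appearance cues versus depth cues.
    """
    def normalize(m: np.ndarray) -> np.ndarray:
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    s2d = normalize(sal_2d)
    sd = normalize(sal_depth)
    # Linear weighted combination into a single 3D visual saliency map.
    return w * s2d + (1.0 - w) * sd

# Toy usage on random 4x4 maps.
sal_3d = fuse_saliency(np.random.rand(4, 4), np.random.rand(4, 4), w=0.7)
assert sal_3d.shape == (4, 4)
assert sal_3d.min() >= 0.0 and sal_3d.max() <= 1.0
```

The fused map would then be stacked with the distorted left and right views as the three CNN input channels; the CNN architecture itself is not specified in this abstract.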
Pages: 273 - 281 (8 pages)
Related papers (50 total)
  • [31] No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement
    Hu, Renzhi
    Luo, Ting
    Jiang, Guowei
    Lin, Zhiqiang
    He, Zhouyan
    ELECTRONICS, 2024, 13 (22)
  • [32] Saliency-enhanced two-stream convolutional network for no-reference image quality assessment
    Ma, Huanhuan
    Cui, Ziguan
    Gan, Zongliang
    Tang, Guijin
    Liu, Feng
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (02)
  • [33] No-reference Stereoscopic Image Quality Assessment Using Binocular Self-similarity and Deep Neural Network
    Lv, Yaqi
    Yu, Mei
    Jiang, Gangyi
    Shao, Feng
    Peng, Zongju
    Chen, Fen
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2016, 47 : 346 - 357
  • [34] A no-reference 3D virtual reality image quality assessment algorithm based on saliency statistics
    Poreddy, Ajay Kumar Reddy
    Kara, Peter Andras
    Appina, Balasubramanyam
    Simon, Aniko
    OPTICS AND PHOTONICS FOR INFORMATION PROCESSING XV, 2021, 11841
  • [35] No-Reference Video Quality Assessment With 3D Shearlet Transform and Convolutional Neural Networks
    Li, Yuming
    Po, Lai-Man
    Cheung, Chun-Ho
    Xu, Xuyuan
    Feng, Litong
    Yuan, Fang
    Cheung, Kwok-Wai
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2016, 26 (06) : 1044 - 1057
  • [36] Three-Stream 3D deep CNN for no-Reference stereoscopic video quality assessment
    Imani, Hassan
    Islam, Md Baharul
    Arica, Nafiz
    INTELLIGENT SYSTEMS WITH APPLICATIONS, 2022, 13
  • [37] No-Reference 3D Point Cloud Quality Assessment Using Multi-View Projection and Deep Convolutional Neural Network
    Bourbia, Salima
    Karine, Ayoub
    Chetouani, Aladine
    El Hassouni, Mohammed
    Jridi, Maher
    IEEE ACCESS, 2023, 11 : 26759 - 26772
  • [38] No-Reference Image Quality Assessment for Multiple Distortions Using Saliency Map Based on Dual-Convolutional Neural Networks
    Li, Jian-Jun
    Xu, Lan-Lan
    Wang, Zhi-Hui
    Chang, Chin-Chen
    JOURNAL OF INTERNET TECHNOLOGY, 2017, 18 (07): : 1701 - 1710
  • [39] A Novel Patch Variance Biased Convolutional Neural Network for No-Reference Image Quality Assessment
    Po, Lai-Man
    Liu, Mengyang
    Yuen, Wilson Y. F.
    Li, Yuming
    Xu, Xuyuan
    Zhou, Chang
    Wong, Peter H. W.
    Lau, Kin Wai
    Luk, Hon-Tung
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (04) : 1223 - 1229
  • [40] Visual Saliency Based Blind Image Quality Assessment via Convolutional Neural Network
    Li, Jie
    Zhou, Yue
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT VI, 2017, 10639 : 550 - 557