No-Reference Image Quality Assessment via Multibranch Convolutional Neural Networks

Cited by: 24
Authors
Pan Z. [1]
Yuan F. [1]
Wang X. [2]
Xu L. [3]
Shao X. [4]
Kwong S. [5]
Affiliations
[1] School of Electrical and Information Engineering, Tianjin University, Tianjin
[2] College of Computer Science and Software Engineering, Shenzhen University, Shenzhen
[3] Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing
[4] School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing
[5] Department of Computer Science, City University of Hong Kong
Funding
National Natural Science Foundation of China
Keywords
Hierarchical feature merge module (HFMM); multibranch CNN; no-reference image quality assessment (NR-IQA); position features;
DOI
10.1109/TAI.2022.3146804
Abstract
No-reference image quality assessment (NR-IQA) aims to evaluate image quality without using the original reference images. Since early NR-IQA methods based on distortion types were only applicable to specific distortion scenarios and lacked practicality, it is challenging to design a universal NR-IQA method. In this article, a multibranch convolutional neural network (MB-CNN) based NR-IQA method is proposed, which includes a spatial-domain feature extractor, a gradient-domain feature extractor, and a weight mechanism. The spatial-domain feature extractor aims to extract the distortion features from the spatial domain. The gradient-domain feature extractor is used to guide the spatial-domain feature extractor to pay more attention to distortions of the structure information. In particular, the spatial-domain feature extractor uses the hierarchical feature merge module to realize multiscale feature representation, and the gradient-domain feature extractor uses pyramidal convolution to extract the multiscale structure information of the distorted image. In addition, a position vector is proposed to build the weight mechanism by considering the position relationships between patches and their entire image, improving the final prediction performance. We conduct experiments on five representative databases: LIVE, TID2013, CSIQ, LIVE MD, and the Waterloo Exploration Database. The experimental results show that the proposed NR-IQA method achieves state-of-the-art performance, which demonstrates the effectiveness of our proposed NR-IQA method. The code of the proposed MB-CNN will be released at https://github.com/NUIST-Videocoding/MB-CNN. © 2020 IEEE.
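Although the official MB-CNN code is to be released at the GitHub link above, the multibranch idea described in the abstract can be illustrated with a minimal PyTorch-style sketch, shown below. The Sobel gradient operator, the tiny two-layer branches, and the position-conditioned softmax weighting are illustrative assumptions standing in for the paper's hierarchical feature merge module, pyramidal convolution, and weight mechanism; they are not the authors' implementation.

# Minimal, hypothetical sketch of a multibranch NR-IQA model: a spatial branch,
# a gradient branch, and a position-aware patch-weighting head. Layer sizes and
# the gradient operator are illustrative assumptions, not the authors' MB-CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_gradient(x):
    """Approximate per-channel gradient magnitude with fixed Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = kx.t()
    c = x.shape[1]
    kx = kx.expand(c, 1, 3, 3).contiguous()
    ky = ky.expand(c, 1, 3, 3).contiguous()
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class TinyBranch(nn.Module):
    """Small conv stack standing in for one feature-extractor branch."""
    def __init__(self, in_ch=3, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, out_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.features(x).flatten(1)

class MultiBranchIQA(nn.Module):
    """Per-patch quality scores pooled by position-conditioned weights."""
    def __init__(self, feat_dim=64, pos_dim=2):
        super().__init__()
        self.spatial = TinyBranch(3, feat_dim)
        self.gradient = TinyBranch(3, feat_dim)
        self.score_head = nn.Linear(2 * feat_dim, 1)
        # The weight head sees fused features plus the patch position vector.
        self.weight_head = nn.Linear(2 * feat_dim + pos_dim, 1)

    def forward(self, patches, positions):
        fused = torch.cat([self.spatial(patches),
                           self.gradient(sobel_gradient(patches))], dim=1)
        scores = self.score_head(fused).squeeze(1)          # per-patch quality
        weights = torch.softmax(
            self.weight_head(torch.cat([fused, positions], dim=1)).squeeze(1),
            dim=0)                                           # weights over patches
        return (weights * scores).sum()                      # image-level score

# Usage on random data: 8 patches of one image with normalized (x, y) positions.
patches = torch.rand(8, 3, 32, 32)
positions = torch.rand(8, 2)
print(MultiBranchIQA()(patches, positions).item())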
Pages: 148 - 160
Page count: 12
Related papers
50 items in total
  • [21] No-reference synthetic image quality assessment with convolutional neural network and local image saliency
    Xiaochuan Wang
    Xiaohui Liang
    Bailin Yang
    Frederick W. B. Li
    Computational Visual Media, 2019, 5 (02) : 193 - 208
  • [22] Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
    Bosse, Sebastian
    Maniry, Dominique
    Mueller, Klaus-Robert
    Wiegand, Thomas
    Samek, Wojciech
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (01) : 206 - 219
  • [23] No-reference synthetic image quality assessment with convolutional neural network and local image saliency
    Wang, Xiaochuan
    Liang, Xiaohui
    Yang, Bailin
    Li, Frederick W. B.
    COMPUTATIONAL VISUAL MEDIA, 2019, 5 (02) : 193 - 208
  • [24] Multiscale convolutional neural network for no-reference image quality assessment with saliency detection
    Fan, Xiaodong
    Wang, Yang
    Wang, Changzhong
    Chen, Xiangyue
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 42607 - 42619
  • [25] Multiscale convolutional neural network for no-reference image quality assessment with saliency detection
    Xiaodong Fan
    Yang Wang
    Changzhong Wang
    Xiangyue Chen
    Multimedia Tools and Applications, 2022, 81 : 42607 - 42619
  • [26] Learning structure of stereoscopic image for no-reference quality assessment with convolutional neural network
    Zhang, Wei
    Qu, Chenfei
    Ma, Lin
    Guan, Jingwei
    Huang, Rui
    PATTERN RECOGNITION, 2016, 59 : 176 - 187
  • [27] CONVOLUTIONAL NEURAL NETWORKS BASED ON RESIDUAL BLOCK FOR NO-REFERENCE IMAGE QUALITY ASSESSMENT OF SMARTPHONE CAMERA IMAGES
    Yao, Chang
    Lu, Yuri
    Liu, Hang
    Hu, Menghan
    Li, Qingli
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2020,
  • [28] No-reference image quality assessment with shearlet transform and deep neural networks
    Li, Yuming
    Po, Lai-Man
    Xu, Xuyuan
    Feng, Litong
    Yuan, Fang
    Cheung, Chun-Ho
    Cheung, Kwok-Wai
    NEUROCOMPUTING, 2015, 154 : 94 - 109
  • [29] No-reference image quality assessment based on residual neural networks (ResNets)
    Ravela, Ravi
    Shirvaikar, Mukul
    Grecos, Christos
    REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2020, 2020, 11401
  • [30] Neural networks for the no-reference assessment of perceived quality
    Gastaldo, P
    Zunino, R
    JOURNAL OF ELECTRONIC IMAGING, 2005, 14 (03) : 1 - 11