Active Fine-Tuning From gMAD Examples Improves Blind Image Quality Assessment

Cited: 19
Authors
Wang, Zhihua [1 ]
Ma, Kede [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Databases; Adaptation models; Training; Predictive models; Task analysis; Image quality; Blind image quality assessment; deep neural networks; gMAD competition; active learning; STATISTICS; INDEX;
DOI
10.1109/TPAMI.2021.3071759
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The research in image quality assessment (IQA) has a long history, and significant progress has been made by leveraging recent advances in deep neural networks (DNNs). Despite high correlation numbers on existing IQA datasets, DNN-based models may be easily falsified in the group maximum differentiation (gMAD) competition. Here we show that gMAD examples can be used to improve blind IQA (BIQA) methods. Specifically, we first pre-train a DNN-based BIQA model using multiple noisy annotators, and fine-tune it on multiple databases of synthetically distorted images, resulting in a "top-performing" baseline model. We then seek pairs of images by comparing the baseline model with a set of full-reference IQA methods in gMAD. The spotted gMAD examples are most likely to reveal the weaknesses of the baseline, and suggest potential ways for refinement. We query human quality annotations for the selected images in a well-controlled laboratory environment, and further fine-tune the baseline on the combination of human-rated images from gMAD and existing databases. This process may be iterated, enabling active fine-tuning from gMAD examples for BIQA. We demonstrate the feasibility of our active learning scheme on a large-scale unlabeled image set, and show that the fine-tuned quality model achieves improved generalizability in gMAD, without destroying performance on previously seen databases.
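The gMAD pair-selection step in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `gmad_pair`, the parameters `defender_scores`, `attacker_scores`, and `tol`, and the exhaustive pairwise scan are all assumptions. In the paper's setting, the defender would be the baseline BIQA model and the attacker one of the full-reference IQA methods.

```python
import numpy as np

def gmad_pair(defender_scores, attacker_scores, tol=0.02):
    """Find a gMAD pair: two images that the defender model rates (nearly)
    equally, while the attacker model rates them maximally differently.
    Such pairs are likely counterexamples for whichever model is wrong."""
    d = np.asarray(defender_scores, dtype=float)
    a = np.asarray(attacker_scores, dtype=float)
    order = np.argsort(d)  # sort image indices by defender score
    best, best_gap = None, -np.inf
    # scan only pairs whose defender scores lie within the tolerance band
    for i in range(len(order)):
        j = i + 1
        while j < len(order) and d[order[j]] - d[order[i]] <= tol:
            gap = abs(a[order[j]] - a[order[i]])
            if gap > best_gap:
                best_gap = gap
                best = (int(order[i]), int(order[j]))
            j += 1
    return best, best_gap
```

Iterating this selection over many defender/attacker pairings, collecting human ratings for the spotted pairs, and fine-tuning on them would correspond to one round of the active fine-tuning loop described above.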
Pages: 4577-4590
Number of pages: 14