Image quality assessment based on self-supervised learning and knowledge distillation

Cited: 5
Authors
Sang, Qingbing [1 ,2 ]
Shu, Ziru [1 ]
Liu, Lixiong [2 ,3 ]
Hu, Cong [1 ]
Wu, Qin [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
[2] Jiangnan Univ, Jiangsu Key Lab Media Design & Software Technol, Wuxi 214122, Peoples R China
[3] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Keywords
Knowledge distillation; Self-supervised learning; Image quality evaluation;
DOI
10.1016/j.jvcir.2022.103708
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Deep neural networks have achieved great success across a wide range of machine learning tasks thanks to their ability to learn rich semantic features from high-dimensional data, and deeper networks have likewise been used to improve the performance of image quality assessment models. This success, however, depends on large models with hundreds of millions of parameters and on the availability of extensive annotated datasets. The lack of large-scale labeled data leads to over-fitting and poor generalization, and such large models demand heavy computation, making them difficult to deploy on edge devices. To address these challenges, we propose an image quality assessment method based on self-supervised learning and knowledge distillation. The teacher network is first trained with self-supervised learning to produce soft-target predictions; the student network is then jointly trained on these soft targets and the ground-truth labels through knowledge distillation. Experiments on five benchmark databases show that the proposed method surpasses the teacher network and even outperforms state-of-the-art approaches. Furthermore, our model is much smaller than the teacher model and can be deployed on edge devices for smooth inference.
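The training recipe described in the abstract, a teacher trained with self-supervised learning whose soft-target predictions supervise a compact student together with the ground-truth labels, can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' code: the student architecture, the weighting factor alpha, and the choice of L1/MSE losses are assumptions made for the example.

# Minimal sketch (not the authors' implementation): joint training of a small
# student IQA regressor on teacher soft targets (predicted quality scores) and
# ground-truth quality labels (e.g. MOS), following the abstract's description.
# The network, the distillation weight alpha, and the loss choices are
# illustrative assumptions, not details taken from the paper.
import torch
import torch.nn as nn

class SmallIQAStudent(nn.Module):
    """Hypothetical lightweight student that maps an image to a quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def distillation_step(student, teacher, images, labels, optimizer, alpha=0.5):
    """One joint-training step: weighted sum of the label loss (student vs.
    ground truth) and the soft-target loss (student vs. frozen teacher)."""
    teacher.eval()
    with torch.no_grad():
        soft_targets = teacher(images)          # teacher quality predictions
    preds = student(images)
    label_loss = nn.functional.l1_loss(preds, labels)
    soft_loss = nn.functional.mse_loss(preds, soft_targets)
    loss = alpha * label_loss + (1 - alpha) * soft_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, alpha balances fidelity to the human quality labels against fidelity to the teacher's soft targets; how the paper sets this balance is not stated in the abstract.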
Pages: 7
Related papers
50 records in total
  • [31] Self-supervised Knowledge Distillation Using Singular Value Decomposition
    Lee, Seung Hyun
    Kim, Dae Ha
    Song, Byung Cheol
    COMPUTER VISION - ECCV 2018, PT VI, 2018, 11210 : 339 - 354
  • [32] Poster: Self-Supervised Quantization-Aware Knowledge Distillation
    Zhao, Kaiqi
    Zhao, Ming
    2023 IEEE/ACM SYMPOSIUM ON EDGE COMPUTING, SEC 2023, 2023, : 250 - 252
  • [33] Image classification framework based on contrastive self-supervised learning
    Zhao H.-W.
    Zhang J.-R.
    Zhu J.-P.
    Li H.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2022, 52 (08): : 1850 - 1856
  • [34] SSSD: Self-Supervised Self Distillation
    Chen, Wei-Chi
    Chu, Wei-Ta
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2769 - 2776
  • [35] NVST Image Denoising Based on Self-Supervised Deep Learning
    Lu Xianwei
    Liu Hui
    Shang Zhenhong
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (06)
  • [36] Revisiting Image Aesthetic Assessment via Self-Supervised Feature Learning
    Sheng, Kekai
    Dong, Weiming
    Chai, Menglei
    Wang, Guohui
    Zhou, Peng
    Huang, Feiyue
    Hu, Bao-Gang
    Ji, Rongrong
    Ma, Chongyang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5709 - 5716
  • [37] Self-supervised Learning for Sonar Image Classification
    Preciado-Grijalva, Alan
    Wehbe, Bilal
    Firvida, Miguel Bande
    Valdenegro-Toro, Matias
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 1498 - 1507
  • [38] Pathological Image Contrastive Self-supervised Learning
    Qin, Wenkang
    Jiang, Shan
    Luo, Lin
    RESOURCE-EFFICIENT MEDICAL IMAGE ANALYSIS, REMIA 2022, 2022, 13543 : 85 - 94
  • [39] Self-supervised Learning for Astronomical Image Classification
    Martinazzo, Ana
    Espadoto, Mateus
    Hirata, Nina S. T.
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 4176 - 4182
  • [40] Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation
    Liu, Pengpeng
    Lyu, Michael R.
    King, Irwin
    Xu, Jia
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09) : 5026 - 5041