SPIQ: A Self-Supervised Pre-Trained Model for Image Quality Assessment

Cited by: 0
Authors
Chen, Pengfei [1 ]
Li, Leida [2 ]
Wu, Qingbo [3 ]
Wu, Jinjian [2 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Jiangsu, Peoples R China
[2] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Keywords
Distortion; Feature extraction; Task analysis; Transformers; Training; Predictive models; Image quality; Blind image quality assessment; self-supervised pre-training; contrastive learning; INDEX;
DOI
10.1109/LSP.2022.3145326
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic & Communication Technology]
Discipline Codes
0808; 0809
Abstract
Blind image quality assessment (BIQA) has witnessed flourishing progress owing to rapid advances in deep learning. The vast majority of prior BIQA methods leverage models pre-trained on ImageNet to mitigate the data-shortage problem. These well-trained models, however, can be sub-optimal when applied to the BIQA task, which differs considerably from the image-classification domain. To address this issue, we make the first attempt to leverage plentiful unlabeled data for self-supervised pre-training on the BIQA task. Using distorted images generated from high-quality samples via the designed distortion augmentation strategy, the proposed pre-training is implemented as a feature-representation prediction task: patch-wise feature representations corresponding to one grid row are integrated to predict the representation of the patch below. The prediction quality is then scored with a contrastive loss so that quality-aware information is captured for the BIQA task. Experimental results on the KADID-10k and KonIQ-10k databases demonstrate that the learned pre-trained model significantly benefits existing learning-based IQA models.
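To make the described scheme concrete, the following is a minimal PyTorch sketch of what the abstract outlines, and it is not the authors' released code: the tiny conv encoder, the GRU row aggregator, the temperature of 0.07, and the toy distortion function are all assumptions filled in around what the abstract states (grid-patch encoding, row-to-row prediction, contrastive scoring).

```python
# Illustrative sketch only; NOT the SPIQ implementation. Encoder, GRU
# aggregator, temperature, and distortion function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distortion_augment(img):
    # Stand-in for the paper's distortion augmentation strategy: the abstract
    # only says distorted images are synthesized from high-quality samples,
    # so additive Gaussian noise stands in for blur/JPEG/noise distortions.
    return (img + 0.05 * torch.randn_like(img)).clamp(0.0, 1.0)

class PatchPredictionSSL(nn.Module):
    """Predict each grid row's patch features from the row above (contrastive)."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Patch encoder: any backbone works; a tiny conv stack keeps this short.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Integrates the patch features of one grid row (left to right).
        self.context = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Maps the integrated context to a prediction of the patch below.
        self.predictor = nn.Linear(feat_dim, feat_dim)

    def forward(self, patches):
        # patches: (B, rows, cols, 3, H, W), a patch grid from one image
        B, R, C, ch, H, W = patches.shape
        feats = self.encoder(patches.reshape(-1, ch, H, W)).view(B, R, C, -1)
        loss = 0.0
        for r in range(R - 1):
            ctx, _ = self.context(feats[:, r])   # integrate row r: (B, C, D)
            pred = self.predictor(ctx)           # predicted features, row r+1
            tgt = feats[:, r + 1]                # actual features, row r+1
            # InfoNCE-style contrastive loss: each prediction must match its
            # own target patch against all other patches as negatives.
            p = F.normalize(pred.reshape(-1, pred.size(-1)), dim=-1)
            t = F.normalize(tgt.reshape(-1, tgt.size(-1)), dim=-1)
            logits = (p @ t.t()) / 0.07
            labels = torch.arange(logits.size(0), device=logits.device)
            loss = loss + F.cross_entropy(logits, labels)
        return loss / (R - 1)
```

Under these assumptions, a training step would sample a patch grid from a distortion-augmented high-quality image and minimize `PatchPredictionSSL()(patches)`; the pre-trained encoder would then be transferred to a downstream learning-based IQA model, as the abstract describes.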
Source: IEEE Signal Processing Letters, 2022, Vol. 29
Pages: 513-517
Page count: 5