DistilIQA: Distilling Vision Transformers for no-reference perceptual CT image quality assessment

Cited by: 0
|
Authors
Baldeon-Calisto, Maria [1 ,4 ]
Rivera-Velastegui, Francisco [2 ]
Lai-Yuen, Susana K. [3 ]
Riofrío, Daniel [4 ]
Pérez-Pérez, Noel [4 ]
Benítez, Diego [4 ]
Flores-Moyano, Ricardo [4 ]
Affiliations
[1] Departamento de Ingeniería Industrial and Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito USFQ, Quito 170157, Ecuador
[2] Departamento de Investigación y Postgrados, Universidad Internacional del Ecuador UIDE, Quito, Ecuador
[3] Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL 33620, United States
[4] Colegio de Ciencias e Ingenierías El Politécnico, Universidad San Francisco de Quito USFQ, Quito 170157, Ecuador
Keywords
Distillation
DOI
10.1016/j.compbiomed.2024.108670
Abstract
No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. The application of no-reference IQA to CT scans is valuable in providing an automated and objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the first step, a teacher ensemble network is constructed by training five Vision Transformer networks using a five-fold division schema. In the second step, a student network, comprising a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated on the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's remarkable performance on both benchmarks, surpassing the capabilities of various CNN and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology. © 2024 Elsevier Ltd
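The two-step distillation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the ensemble-averaging of the five fold-wise teachers, and the `alpha` weight blending ground-truth scores with teacher predictions are all assumptions, since the abstract does not specify the exact target construction or loss.

```python
import numpy as np

def teacher_ensemble_predict(fold_predictions: np.ndarray) -> np.ndarray:
    """Average the quality-score predictions of the five fold-wise
    teacher ViTs (shape: [n_folds, n_images]) into one ensemble
    prediction per image. Averaging is an assumed aggregation rule."""
    return fold_predictions.mean(axis=0)

def student_targets(y_true: np.ndarray,
                    fold_predictions: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Build regression targets for the student ViT by blending the
    original labels with the teacher ensemble's predictions.
    `alpha` is a hypothetical weighting, not taken from the paper."""
    teacher = teacher_ensemble_predict(fold_predictions)
    return alpha * y_true + (1.0 - alpha) * teacher

# Toy usage: 5 teachers, 3 images with ground-truth quality scores.
y = np.array([3.0, 4.0, 2.0])
teacher_preds = np.array([[3.2, 3.8, 2.1],
                          [3.0, 4.1, 2.0],
                          [3.1, 3.9, 1.9],
                          [2.9, 4.0, 2.2],
                          [3.3, 4.2, 1.8]])
targets = student_targets(y, teacher_preds, alpha=0.5)
```

The student would then be trained with an ordinary regression loss (e.g. MSE) against these blended targets, which is one common way to realize label-based knowledge distillation for regression.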
Related Papers
50 items in total
  • [1] Distilling Vision Transformers for no-reference Perceptual CT Image Quality Assessment
    Baldeon-Calisto, Maria G.
    Rivera-Velastegui, Francisco
    Lai-Yuen, Susana K.
    Riofrio, Daniel
    Perez-Perez, Noel
    Benitez, Diego
    Flores-Moyano, Ricardo
    [J]. MEDICAL IMAGING 2024: IMAGE PROCESSING, 2024, 12926
  • [2] A no-reference perceptual image quality assessment database for learned image codecs
    Zhang, Jiaqi
    Fang, Zhigao
    Yu, Lu
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2022, 88
  • [3] No-reference color image quality assessment: from entropy to perceptual quality
    Xiaoqiao Chen
    Qingyi Zhang
    Manhui Lin
    Guangyi Yang
    Chu He
    [J]. EURASIP Journal on Image and Video Processing, 2019
  • [4] No-reference color image quality assessment: from entropy to perceptual quality
    Chen, Xiaoqiao
    Zhang, Qingyi
    Lin, Manhui
    Yang, Guangyi
    He, Chu
    [J]. EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2019, 2019 (01)
  • [5] Combining CNN and transformers for full-reference and no-reference image quality assessment
    Zeng, Chao
    Kwong, Sam
    [J]. NEUROCOMPUTING, 2023, 549
  • [6] No-reference perceptual CT image quality assessment based on a self-supervised learning framework
    Lee, Wonkyeong
    Cho, Eunbyeol
    Kim, Wonjin
    Choi, Hyebin
    Beck, Kyongmin Sarah
    Yoon, Hyun Jung
    Baek, Jongduk
    Choi, Jang-Hwan
    [J]. MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2022, 3 (04)
  • [7] No-Reference Image Quality Assessment Based on the Fusion of Statistical and Perceptual Features
    Varga, Domonkos
    [J]. JOURNAL OF IMAGING, 2020, 6 (08)
  • [8] A No-Reference Image Quality Assessment
    Kemalkar, Aniket K.
    Bairagi, Vinayak K.
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN COMPUTING, COMMUNICATION AND NANOTECHNOLOGY (ICE-CCN'13), 2013, : 462 - 465
  • [9] No-Reference Stereoscopic Image Quality Assessment Based on Image Distortion and Stereo Perceptual Information
    Shen, Liquan
    Fang, Ruigang
    Yao, Yang
    Geng, Xianqiu
    Wu, Dapeng
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2019, 3 (01): 59 - 72
  • [10] Perceptual Image Quality Assessment with Transformers
    Cheon, Manri
    Yoon, Sung-Jun
    Kang, Byungyeon
    Lee, Junwoo
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 433 - 442