Towards robust diagnosis of COVID-19 using vision self-attention transformer

Cited by: 0
Authors
Fozia Mehboob
Abdul Rauf
Richard Jiang
Abdul Khader Jilani Saudagar
Khalid Mahmood Malik
Muhammad Badruddin Khan
Mozaherul Hoque Abdul Hasnat
Abdullah AlTameem
Mohammed AlKhathami
Affiliations
[1] Knightec AB
[2] LIRA Center, Lancaster University
[3] Department of Computer Science and Engineering, Oakland University
[4] Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU)
Source: SCIENTIFIC REPORTS, 2022, 12 (01)
DOI: Not available
Abstract
Since its appearance, the outbreak of COVID-19 has affected about 200 countries and endangered millions of lives. COVID-19 is an extremely contagious disease that can quickly incapacitate healthcare systems if infected cases are not handled in a timely manner. Several Convolutional Neural Network (CNN) based techniques have been developed to diagnose COVID-19, but they require large labelled datasets for full training, and such datasets are scarce. To mitigate this problem and facilitate the diagnosis of COVID-19, we developed a transformer-based approach that applies a self-attention mechanism to CT slices. The transformer architecture can exploit ample unlabelled data through pre-training. The paper compares the performance of the self-attention transformer-based approach with CNN and ensemble classifiers for the diagnosis of COVID-19 on the binary-class Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) CT scan dataset and the multi-class Hybrid-learning for UnbiaSed predicTion of COVID-19 (HUST-19) CT scan dataset. To perform this comparison, deep learning-based classifiers and ensemble classifiers were tested against the proposed approach on CT scan images. The proposed approach is more effective in detecting COVID-19, with an accuracy of 99.7% on the multi-class HUST-19 dataset and 98% on the binary-class SARS-CoV-2 dataset. Cross-corpus evaluation achieves an accuracy of 93% when the model is trained on the HUST-19 dataset and tested on the Brazilian COVID-19 dataset.
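To make the approach described in the abstract concrete, the following is a minimal PyTorch sketch of a vision transformer classifier for CT slices: each slice is split into patches, the patch embeddings pass through stacked multi-head self-attention encoder layers, and a learnable [CLS] token is used for classification. This is an illustrative assumption, not the authors' implementation; the class name CTSliceViT and all patch sizes, layer sizes, and class counts are hypothetical choices.

import torch
import torch.nn as nn

class CTSliceViT(nn.Module):
    """Illustrative vision transformer for single-channel CT slices (not the paper's exact model)."""
    def __init__(self, img_size=224, patch_size=16, in_chans=1,
                 embed_dim=256, depth=6, num_heads=8, num_classes=2):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Split the slice into non-overlapping patches and project each patch
        # to an embedding vector with a strided convolution.
        self.patch_embed = nn.Conv2d(in_chans, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learnable classification token and positional embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        # Stack of multi-head self-attention encoder layers.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                       # x: (B, 1, 224, 224) CT slices
        x = self.patch_embed(x)                 # (B, embed_dim, 14, 14)
        x = x.flatten(2).transpose(1, 2)        # (B, 196, embed_dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                     # self-attention over all patch tokens
        return self.head(x[:, 0])               # classify from the [CLS] token

if __name__ == "__main__":
    model = CTSliceViT(num_classes=2)           # binary SARS-CoV-2 setting
    logits = model(torch.randn(4, 1, 224, 224))
    print(logits.shape)                         # torch.Size([4, 2])

For the multi-class HUST-19 setting, the same sketch would be instantiated with num_classes set to the number of HUST-19 categories, and the encoder could first be pre-trained on unlabelled CT slices before fine-tuning, in line with the abstract's point about exploiting unlabelled data.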
Related papers (50 items in total)
  • [1] Towards robust diagnosis of COVID-19 using vision self-attention transformer
    Mehboob, Fozia
    Rauf, Abdul
    Jiang, Richard
    Saudagar, Abdul Khader Jilani
    Malik, Khalid Mahmood
    Khan, Muhammad Badruddin
    Hasnat, Mozaherul Hoque Abdul
    AlTameem, Abdullah
    AlKhathami, Mohammed
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [2] Lite Vision Transformer with Enhanced Self-Attention
    Yang, Chenglin
    Wang, Yilin
    Zhang, Jianming
    Zhang, He
    Wei, Zijun
    Lin, Zhe
    Yuille, Alan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 11988 - 11998
  • [3] COVID-19 lesion segmentation using convolutional LSTM for self-attention
    Killekar, Aditya
    Grodecki, Kajetan
    Lin, Andrew
    Cadet, Sebastien
    McElhinney, Priscilla
    Razipour, Aryabod
    Chan, Cato
    Pressman, Barry D.
    Julien, Peter
    Chen, Peter
    Simon, Judit
    Maurovich-Horvat, Pal
    Gaibazzi, Nicola
    Thakur, Udit
    Mancini, Elisabetta
    Agalbato, Cecilia
    Munechika, Jiro
    Matsumoto, Hidenari
    Mene, Roberto
    Parati, Gianfranco
    Cernigliaro, Franco
    Nerlekar, Nitesh
    Torlasco, Camilla
    Pontone, Gianluca
    Dey, Damini
    Slomka, Piotr J.
    MEDICAL IMAGING 2022: IMAGE PROCESSING, 2022, 12032
  • [4] COViT-GAN: Vision Transformer for COVID-19 Detection in CT Scan Images with Self-Attention GAN for Data Augmentation
    Ambita, Ara Abigail E.
    Boquio, Eujene Nikka V.
    Naval, Prospero C., Jr.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 587 - 598
  • [5] Vision Transformer Based on Reconfigurable Gaussian Self-attention
    Zhao, L.
    Zhou, J.-K.
    Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (09): 1976 - 1988
  • [6] Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention
    Pan, Xuran
    Ye, Tianzhu
    Xia, Zhuofan
    Song, Shiji
    Huang, Gao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2082 - 2091
  • [7] Robust Visual Tracking Using Hierarchical Vision Transformer with Shifted Windows Multi-Head Self-Attention
    Gao, Peng
    Zhang, Xin-Yue
    Yang, Xiao-Li
    Ni, Jian-Cheng
    Wang, Fei
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107D (01) : 161 - 164
  • [8] Lightweight Vision Transformer with Spatial and Channel Enhanced Self-Attention
    Zheng, Jiahao
    Yang, Longqi
    Li, Yiying
    Yang, Ke
    Wang, Zhiyuan
    Zhou, Jun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1484 - 1488
  • [9] AttentionLite: Towards Efficient Self-Attention Models for Vision
    Kundu, Souvik
    Sundaresan, Sairam
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2225 - 2229
  • [10] Attention Guided CAM: Visual Explanations of Vision Transformer Guided by Self-Attention
    Leem, Saebom
    Seo, Hyunseok
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 2956 - 2964