Enhance fashion classification of mosquito vector species via self-supervised vision transformer

Cited by: 0
Authors
Veerayuth Kittichai [1]
Morakot Kaewthamasorn [2]
Tanawat Chaiphongpachara [3]
Sedthapong Laojun [3]
Tawee Saiwichai [4]
Kaung Myat Naing [6]
Teerawat Tongloy [6]
Siridech Boonsang [5]
Santhad Chuwongin [6]
Affiliations
[1] King Mongkut’s Institute of Technology Ladkrabang, Faculty of Medicine
[2] Chulalongkorn University, Veterinary Parasitology Research Unit, Faculty of Veterinary Science
[3] Suan Sunandha Rajabhat University, Department of Public Health and Health Promotion, College of Allied Health Science
[4] Mahidol University, Department of Parasitology and Entomology, Faculty of Public Health
[5] King Mongkut’s Institute of Technology Ladkrabang, Department of Electrical Engineering, School of Engineering
[6] King Mongkut’s Institute of Technology Ladkrabang, College of Advanced Manufacturing Innovation
Keywords
Mosquito vector species; Artificial intelligence; Self-distillation with unlabeled data; Mobile phone application
DOI
10.1038/s41598-024-83358-8
Abstract
Vector-borne diseases pose a major worldwide health concern, affecting more than 1 billion people globally. Among the various blood-feeding arthropods, mosquitoes stand out as the primary carriers of diseases of both medical and veterinary importance. Understanding the distinct roles played by different mosquito species is therefore crucial for improving control measures against mosquito-transmitted diseases. The conventional method for identifying mosquito species is laborious and takes considerable effort to learn; classification is carried out by skilled laboratory personnel, making the process inherently time-intensive and restricting the task to entomology specialists. Integrating artificial intelligence with standard taxonomy, including molecular techniques, is therefore essential for accurate mosquito species identification. Advances in artificial intelligence have made it feasible to develop automated systems for sample collection and identification. This study introduces a self-supervised Vision Transformer supporting an automatic model for classifying mosquitoes found across various regions of Thailand. The objective is to use self-distillation with unlabeled data (DINOv2) to develop models on a mobile phone-captured dataset containing 16 species of female mosquitoes, including those known to transmit malaria and dengue. The DINOv2 model surpassed the ViT baseline model in precision and recall for all mosquito species. At the species level, the DINOv2 model reduced false negatives and false positives and improved precision and recall relative to the baseline model across all mosquito species. Notably, at least 10 classes performed outstandingly, with precision and recall rates exceeding 90%.
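The self-distillation objective underlying DINO-style pretraining can be illustrated with a minimal numpy sketch (an illustrative reconstruction, not the authors' code or the official DINOv2 implementation): a student network is trained to match the sharpened, centered outputs of a momentum-averaged teacher, with no ground-truth labels involved.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between sharpened, centered teacher targets
    and the student's predictions; no labels are used."""
    teacher_probs = softmax((teacher_logits - center) / teacher_temp)
    student_logp = log_softmax(student_logits / student_temp)
    return float(-(teacher_probs * student_logp).sum(axis=-1).mean())

def ema_update(teacher_w, student_w, momentum=0.996):
    """The teacher's weights track the student as an exponential moving average."""
    return momentum * teacher_w + (1.0 - momentum) * student_w
```

In the real DINOv2 pipeline the student and teacher are full Vision Transformers receiving different augmented crops of the same image; the logits here are stand-ins for their projection-head outputs.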
Remarkably, applying cropping techniques to the dataset instead of using the original photographs significantly improved performance across all DINOv2 models studied, raising recall to 87.86%, precision to 91.71%, F1 score to 88.71%, and accuracy to 98.45%. Malaria mosquito species were easily distinguished from other genera such as Aedes, Mansonia, Armigeres, and Culex. While classifying malaria vector species presented challenges for the DINOv2 model, using the cropped images improved precision to up to 96% for identifying one of the top three malaria vectors in Thailand, Anopheles minimus. A proficiently trained DINOv2 model, coupled with effective data management, can contribute to the development of a mobile phone application. Furthermore, this method shows promise in supporting field professionals who are not entomology experts in effectively addressing the pathogens responsible for diseases transmitted by female mosquitoes.
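The reported per-species precision, recall, F1 score, and accuracy follow directly from the confusion matrix; a minimal numpy sketch (function name and layout are illustrative, not taken from the paper):

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class precision/recall/F1 and overall accuracy from a
    confusion matrix where conf[i, j] counts true class i predicted as j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp   # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp   # belongs to the class, but missed
    precision = tp / np.maximum(tp + fp, 1e-12)
    recall = tp / np.maximum(tp + fn, 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / conf.sum()
    return precision, recall, f1, accuracy
```

Macro-averaging the per-class values (e.g. `precision.mean()`) yields summary figures of the kind quoted above; fewer off-diagonal counts for a species mean fewer false positives and false negatives for it.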
Related papers (50 total)
  • [1] Histopathological Image Classification based on Self-Supervised Vision Transformer and Weak Labels
    Gul, Ahmet Gokberk
    Cetin, Oezdemir
    Reich, Christoph
    Flinner, Nadine
    Prangemeier, Tim
    Koeppl, Heinz
    MEDICAL IMAGING 2022: DIGITAL AND COMPUTATIONAL PATHOLOGY, 2022, 12039
  • [2] Enhancing mosquito classification through self-supervised learning
    Charoenpanyakul, Ratana
    Kittichai, Veerayuth
    Eiamsamang, Songpol
    Sriwichai, Patchara
    Pinetsuksai, Natchapon
    Naing, Kaung Myat
    Tongloy, Teerawat
    Boonsang, Siridech
    Chuwongin, Santhad
    Scientific Reports, 14 (1)
  • [3] MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer
    Zhao, Chaoqiang
    Zhang, Youmin
    Poggi, Matteo
    Tosi, Fabio
    Guo, Xianda
    Zhu, Zheng
    Huang, Guan
    Tang, Yang
    Mattoccia, Stefano
    2022 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV, 2022, : 668 - 678
  • [4] Multi-scale vision transformer classification model with self-supervised learning and dilated convolution
    Xing, Liping
    Jin, Hongmei
    Li, Hong-an
    Li, Zhanli
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 103
  • [5] Multimodal Image Fusion via Self-Supervised Transformer
    Zhang, Jing
    Liu, Yu
    Liu, Aiping
    Xie, Qingguo
    Ward, Rabab
    Wang, Z. Jane
    Chen, Xun
    IEEE SENSORS JOURNAL, 2023, 23 (09) : 9796 - 9807
  • [6] Self-Supervised Transformer Networks for Error Classification of Tightening Traces
    Wilkman, Dennis Bogatov
    Tang, Lifei
    Morozovska, Kateryna
    Bragone, Federica
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1373 - 1380
  • [7] Self-supervised Video Transformer
    Ranasinghe, Kanchana
    Naseer, Muzammal
    Khan, Salman
    Khan, Fahad Shahbaz
    Ryoo, Michael S.
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 2864 - 2874
  • [8] A Hierarchical Vision Transformer Using Overlapping Patch and Self-Supervised Learning
    Ma, Yaxin
    Li, Ming
    Chang, Jun
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [9] Self-Supervised Time Series Classification Based on LSTM and Contrastive Transformer
    Zou, Yuanhao
    Zhang, Yufei
    Zhao, Xiaodong
    Wuhan University Journal of Natural Sciences, 2022, 27 (06) : 521 - 530
  • [10] Contrastive-weighted self-supervised model for long-tailed data classification with vision transformer augmented
    Hou, Rujie
    Chen, Jinglong
    Feng, Yong
    Liu, Shen
    He, Shuilong
    Zhou, Zitong
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2022, 177