Multi-model CNN fusion for sperm morphology analysis

Cited: 22
Authors
Yuzkat, Mecit [1 ,2 ]
Ilhan, Hamza Osman [1 ]
Aydin, Nizamettin [1 ]
Affiliations
[1] Yildiz Tech Univ, Fac Elect & Elect, Dept Comp Engn, Istanbul, Turkey
[2] Mus Alparslan Univ, Fac Engn & Architecture, Dept Comp Engn, Mus, Turkey
Keywords
Sperm morphology; Convolutional neural network (CNN); Data augmentation; Decision level fusion; CONVOLUTIONAL NEURAL-NETWORKS; GOLD-STANDARD; SEMEN; CLASSIFICATION; MOTILITY; QUALITY; HEAD; SEGMENTATION; ACROSOME;
DOI
10.1016/j.compbiomed.2021.104790
Chinese Library Classification (CLC) code
Q [Biological Sciences]
Subject classification codes
07; 0710; 09
Abstract
Infertility is a common disorder affecting 20% of couples worldwide, and 40% of all cases are related to male infertility. The first step in the determination of male infertility is semen analysis, during which experts evaluate the morphology, concentration, and motility of sperm. Most laboratories perform these tests manually; however, manual semen analysis is time-consuming and subject to observer variability. Therefore, computer-assisted systems are required. Additionally, a large amount of data is necessary to obtain more objective results. Deep learning networks, which have become popular in recent years, are used for processing and analysing such quantities of data. Convolutional neural networks (CNNs) are a class of deep learning algorithms used extensively for processing and analysing images. In this study, six different CNN models were created to fully automate the morphological classification of sperm images. Additionally, two decision-level fusion techniques, namely hard voting and soft voting, were applied over these CNNs. To evaluate the performance of the proposed approach, three publicly available sperm morphology data sets were used in the experimental tests. For an objective analysis, a cross-validation technique was applied by dividing each data set into five sub-sets. In addition, various data augmentation scales and a mini-batch analysis were employed to obtain the highest classification accuracies. Finally, classification accuracies of 90.73%, 85.18%, and 71.91% were obtained for the SMIDS, HuSHeM, and SCIAN-Morpho data sets, respectively, using the soft-voting-based fusion approach over the six CNN models. The results suggest that the proposed approach can classify sperm images automatically and achieves high accuracy on three different data sets.
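The decision-level fusion described in the abstract can be illustrated with a minimal sketch. Assuming each of the trained CNNs exposes per-image class probabilities (e.g. softmax outputs), soft voting averages those probabilities before taking the argmax, while hard voting lets each model cast one vote for its predicted class and takes the majority. The function names and the dummy probability arrays below are illustrative assumptions, not taken from the paper.

import numpy as np

# Minimal sketch of decision-level fusion over K trained CNN models.
# Each entry of prob_list is an (N, C) array of class probabilities for
# N images and C classes (e.g. the softmax output of one CNN).

def soft_voting(prob_list):
    """Average the per-model class probabilities, then take the argmax."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)   # (N, C)
    return np.argmax(avg, axis=1)                        # (N,)

def hard_voting(prob_list):
    """Each model votes for its own argmax class; the majority class wins."""
    votes = np.stack([np.argmax(p, axis=1) for p in prob_list])  # (K, N)
    n_classes = prob_list[0].shape[1]
    counts = np.array([np.bincount(votes[:, i], minlength=n_classes)
                       for i in range(votes.shape[1])])          # (N, C)
    return np.argmax(counts, axis=1)  # ties resolve to the lower class index

# Dummy example: three "models", four images, three classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print("soft voting:", soft_voting(probs))
print("hard voting:", hard_voting(probs))

In this setup, soft voting keeps the models' confidence information, which is one plausible reason it can outperform hard voting when individual models are well calibrated.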
Pages: 12
Related papers (50 in total)
  • [21] Research on Flight delay Prediction based on Multi-Model Fusion
    Mang, Chen
    Chen, Yunli
    PROCEEDINGS OF 2020 IEEE 5TH INFORMATION TECHNOLOGY AND MECHATRONICS ENGINEERING CONFERENCE (ITOEC 2020), 2020, : 725 - 730
  • [22] Lane Change Intention Recognition Based on Multi-Model Fusion
    Fang, Yijie
    Liao, Zhuhua
    Huang, Haokai
    Li, Yanjun
    Computer Engineering and Applications, 2024, 60 (02) : 344 - 352
  • [23] Research on Photovoltaic Power Prediction Based on Multi-model Fusion
    Chen, Jiaqi
    Gao, Qiang
    Ji, Yuehui
    Xu, Zhao
    Liu, Junjie
    PROCEEDINGS OF THE 4TH INTERNATIONAL SYMPOSIUM ON NEW ENERGY AND ELECTRICAL TECHNOLOGY, ISNEET 2023, 2024, 1255 : 59 - 67
  • [24] A MULTI-MODEL FUSION FRAMEWORK FOR NIR-TO-RGB TRANSLATION
    Yan, Longbin
    Wang, Xiuheng
    Zhao, Min
    Liu, Shumin
    Chen, Jie
    2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020, : 459 - 462
  • [25] Abnormal gesture recognition based on multi-model fusion strategy
    Lin, Chi
    Lin, Xuxin
    Xie, Yiliang
    Liang, Yanyan
    MACHINE VISION AND APPLICATIONS, 2019, 30 (05) : 889 - 900
  • [26] Pathogenic virus detection method based on multi-model fusion
    Zhao, Xiaoyong
    Wang, Jingwei
    PROCEEDINGS OF THE 2020 INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND TELECOMMUNICATION SYSTEMS (CITS), 2020, : 89 - 92
  • [27] Multi-model fusion metric learning for image set classification
    Gao, Xizhan
    Sun, Quansen
    Xu, Haitao
    Wei, Dong
    Gao, Jianqgang
    KNOWLEDGE-BASED SYSTEMS, 2019, 164 : 253 - 264
  • [28] Video-level Multi-model Fusion for Action Recognition
    Wang, Xiaomin
    Zhang, Junsan
    Wang, Leiquan
    Yu, Philip S.
    Zhu, Jie
    Li, Haisheng
    PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19), 2019, : 159 - 168
  • [29] Multi-model Fusion Attention Network for News Text Classification
    Li Z.
    Wu J.
    Miao J.
    Yu X.
    Li S.
    International Journal for Engineering Modelling, 2022, 35 (02) : 1 - 15
  • [30] Battery state of charge estimation based on multi-model fusion
    Wang, Qiang
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 2036 - 2041