A self-supervised learning model based on variational autoencoder for limited-sample mammogram classification

Cited by: 1
Authors
Karagoz, Meryem Altin [1 ,2 ,3 ]
Nalbantoglu, O. Ufuk [2 ,3 ,4 ]
Affiliations
[1] Sivas Cumhuriyet Univ, Dept Comp Engn, Sivas, Turkiye
[2] Erciyes Univ, Dept Comp Engn, Kayseri, Turkiye
[3] Erciyes Univ, Artificial Intelligence & Big Data Applicat & Res, Kayseri, Turkiye
[4] Erciyes Univ, Genome & Stem Cell Ctr GenKok, Kayseri, Turkiye
Keywords
Self-supervised learning; Mammography; Classification; Variational autoencoder
DOI
10.1007/s10489-024-05358-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models have found extensive application in medical imaging analysis, particularly in mammography classification. However, these models encounter challenges associated with the limited availability of annotated public mammography datasets. In recent years, self-supervised learning (SSL) has emerged as a noteworthy solution to data scarcity by leveraging pretext and downstream tasks. Nevertheless, we recognize a notable scarcity of self-supervised learning models designed for the classification task in mammography. In this context, we propose a novel self-supervised learning model for limited-sample mammogram classification. The proposed SSL model comprises two primary networks. The first is a pretext task network designed to learn discriminative features through mammogram reconstruction using a variational autoencoder (VAE). The downstream network, dedicated to mammogram classification, then takes the encoded space extracted by the VAE as input to a simple convolutional neural network. The performance of the proposed model is assessed on the public INbreast and MIAS datasets. Comparative analyses are conducted against previous studies on the same classification tasks and datasets. The proposed SSL model demonstrates high performance, with AUCs of 0.94 for density and 0.99 for malignant-nonmalignant classification on INbreast, and 0.97 for benign-malignant, 0.99 for density, and 0.99 for normal-benign-malignant classification on MIAS. Additionally, the proposed model reduces computational cost, with only 228 trainable parameters, 204.95K FLOPs, and a depth of 3 for mammogram classification. Overall, the proposed SSL model exhibits a robust network architecture characterized by repeatability, consistency, generalization ability, and transferability among datasets, with lower computational complexity than previous studies.
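The two-stage pipeline the abstract describes (a VAE pretext network learns a latent code by reconstruction, then a small downstream classifier operates on that code) can be sketched very roughly in NumPy. All dimensions, the linear encoder heads, and the softmax classifier below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear stand-ins for the VAE encoder's mean and log-variance heads.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def classify(z, W, b):
    # Downstream head: softmax classifier over the latent code z.
    logits = z @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy sizes (hypothetical; the paper's real layer sizes differ).
d_in, d_latent, n_classes, n = 64, 8, 2, 4
x = rng.standard_normal((n, d_in))          # stand-in for flattened mammograms
W_mu = 0.1 * rng.standard_normal((d_in, d_latent))
W_logvar = 0.1 * rng.standard_normal((d_in, d_latent))
W = 0.1 * rng.standard_normal((d_latent, n_classes))
b = np.zeros(n_classes)

# Pretext stage: encode images into the latent space.
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)

# Downstream stage: classify from the latent code, not the raw pixels.
probs = classify(z, W, b)
print(probs.shape)  # (4, 2)
```

The key design point mirrored here is that the classifier sees only the low-dimensional latent code, which is how the paper keeps the downstream network tiny (on the order of a few hundred parameters).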
Pages: 3448-3463 (16 pages)