Attention-based Fusion Network for Breast Cancer Segmentation and Classification Using Multi-modal Ultrasound Images

Cited by: 1
Authors
Cho, Yoonjae [1 ,2 ,3 ]
Misra, Sampa [1 ,2 ,3 ]
Managuli, Ravi [4 ]
Barr, Richard G. [5 ]
Lee, Jeongmin [6 ,7 ]
Kim, Chulhong [1 ,2 ,3 ,8 ]
Affiliations
[1] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Mech Engn, Convergence IT Engn, Dept Elect Engn, Pohang 37673, South Korea
[2] Pohang Univ Sci & Technol, Grad Sch Artificial Intelligence, Pohang 37673, South Korea
[3] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Pohang, South Korea
[4] Univ Washington, Dept Bioengn, Seattle, WA USA
[5] Southwoods Imaging, Youngstown, OH USA
[6] Sungkyunkwan Univ, Sch Med, Dept Radiol, Seoul, South Korea
[7] Sungkyunkwan Univ, Ctr Imaging Sci, Samsung Med Ctr, Sch Med, Seoul, South Korea
[8] Opticho Inc, Pohang, South Korea
Source
ULTRASOUND IN MEDICINE AND BIOLOGY | 2025, Vol. 51, Issue 3
Funding
National Research Foundation of Singapore;
Keywords
Breast cancer; Breast ultrasound images; Multi-modality; Classification; Segmentation; Transfer learning; BENIGN;
DOI
10.1016/j.ultrasmedbio.2024.11.020
CLC Number
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
Objective: Breast cancer is one of the most commonly occurring cancers in women, so early detection and treatment lead to better patient outcomes. Ultrasound (US) imaging plays a crucial role in the early detection of breast cancer, providing a cost-effective, convenient, and safe diagnostic approach. To date, much research has been conducted to facilitate reliable and effective early diagnosis of breast cancer through US image analysis. Recently, with the introduction of machine learning technologies such as deep learning (DL), automated lesion segmentation and classification for identifying malignant masses in breast US images have progressed, and computer-aided diagnosis (CAD) technology is being applied effectively in clinics. Herein, we propose a novel DL-based "segmentation + classification" model based on B- and SE-mode images.
Methods: For the segmentation task, we propose a Multi-Modal Fusion U-Net (MMF-U-Net), which segments lesions by mixing B- and SE-mode information through fusion blocks. After segmentation, the lesion area is cropped from the B- and SE-mode images using the predicted segmentation mask. The encoder part of the pre-trained MMF-U-Net model is then applied to the cropped B- and SE-mode breast US images to classify benign and malignant lesions.
Results: The proposed method achieved good segmentation and classification scores. On real-world clinical data, the Dice score, intersection over union (IoU), precision, and recall of the proposed MMF-U-Net are 78.23%, 68.60%, 82.21%, and 80.58%, respectively. The classification accuracy is 98.46%.
Conclusion: Our results show that the proposed method effectively segments the breast lesion area and can reliably distinguish benign from malignant lesions.
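The Methods summary above describes three concrete steps: fusing B- and SE-mode features inside a U-Net via fusion blocks, cropping both modalities with the predicted mask, and reusing the pre-trained encoder for benign/malignant classification. The PyTorch sketch below is one plausible reading of those steps and of the overlap metrics quoted in the Results; the sigmoid-gated FusionBlock design, the crop_to_mask helper, and the dice_iou function are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    """Attention-style fusion of same-resolution B-mode and SE-mode
    feature maps. The sigmoid-gated mixing here is an assumption; the
    paper's fusion blocks may be structured differently."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_b: torch.Tensor, feat_se: torch.Tensor) -> torch.Tensor:
        # Per-pixel, per-channel weights decide how much each modality contributes.
        attn = self.gate(torch.cat([feat_b, feat_se], dim=1))
        return attn * feat_b + (1.0 - attn) * feat_se


def crop_to_mask(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Crop an image (..., H, W) to the bounding box of a binary mask (H, W),
    mirroring the 'crop the lesion area using the predicted mask' step."""
    ys, xs = torch.where(mask.bool())
    if ys.numel() == 0:  # empty prediction: fall back to the full image
        return image
    return image[..., ys.min():ys.max() + 1, xs.min():xs.max() + 1]


def dice_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Dice score and IoU between binary masks: the two overlap metrics
    quoted in the Results."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().float()
    union = (pred | target).sum().float()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice.item(), iou.item()


# Illustrative end-to-end flow (networks omitted for brevity):
# 1) fused = FusionBlock(64)(enc_b(b_img), enc_se(se_img))  -> decoder -> mask
# 2) roi_b, roi_se = crop_to_mask(b_img, mask), crop_to_mask(se_img, mask)
# 3) logits = classifier_head(encoder(roi_b, roi_se))       -> benign vs. malignant
```

In this reading, the encoder weights learned during segmentation initialize the classifier, which is consistent with the "Transfer learning" keyword in the record.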
Pages: 568-577
Page count: 10