Attention-based Fusion Network for Breast Cancer Segmentation and Classification Using Multi-modal Ultrasound Images

Cited by: 1
Authors
Cho, Yoonjae [1 ,2 ,3 ]
Misra, Sampa [1 ,2 ,3 ]
Managuli, Ravi [4 ]
Barr, Richard G. [5 ]
Lee, Jeongmin [6 ,7 ]
Kim, Chulhong [1 ,2 ,3 ,8 ]
Affiliations
[1] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Mech Engn, Convergence IT Engn,Dept Elect Engn, Pohang 37673, South Korea
[2] Pohang Univ Sci & Technol, Grad Sch Artificial Intelligence, Pohang 37673, South Korea
[3] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Pohang, South Korea
[4] Univ Washington, Dept Bioengn, Seattle, WA USA
[5] Southwoods Imaging, Youngstown, OH USA
[6] Sungkyunkwan Univ, Sch Med, Dept Radiol, Seoul, South Korea
[7] Sungkyunkwan Univ, Ctr Imaging Sci, Samsung Med Ctr, Sch Med, Seoul, South Korea
[8] Opticho Inc, Pohang, South Korea
Source
ULTRASOUND IN MEDICINE AND BIOLOGY | 2025, Vol. 51, No. 03
Funding
National Research Foundation of Singapore;
Keywords
Breast cancer; Breast ultrasound images; Multi-modality; Classification; Segmentation; Transfer learning; BENIGN;
DOI
10.1016/j.ultrasmedbio.2024.11.020
Chinese Library Classification (CLC) number
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
Objective: Breast cancer is one of the most commonly occurring cancers in women, and early detection and treatment lead to better patient outcomes. Ultrasound (US) imaging plays a crucial role in the early detection of breast cancer, providing a cost-effective, convenient, and safe diagnostic approach. To date, much research has been conducted to facilitate reliable and effective early diagnosis of breast cancer through US image analysis. Recently, with the introduction of machine learning technologies such as deep learning (DL), automated lesion segmentation and classification and the identification of malignant masses in breast US images have progressed, and computer-aided diagnosis (CAD) technology is being applied effectively in clinics. Herein, we propose a novel DL-based "segmentation + classification" model based on B-mode and strain elastography (SE)-mode images.
Methods: For the segmentation task, we propose a Multi-Modal Fusion U-Net (MMF-U-Net), which segments lesions by mixing B- and SE-mode information through fusion blocks. After segmentation, the lesion area is cropped from the B- and SE-mode images using the predicted segmentation mask. The encoder of the pre-trained MMF-U-Net model is then applied to the cropped B- and SE-mode breast US images to classify benign and malignant lesions.
Results: The proposed method achieved good segmentation and classification scores. On real-world clinical data, the MMF-U-Net achieved a Dice score of 78.23%, an intersection over union (IoU) of 68.60%, a precision of 82.21%, and a recall of 80.58%. The classification accuracy was 98.46%.
Conclusion: Our results show that the proposed method effectively segments the breast lesion area and reliably distinguishes benign from malignant lesions.
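To make the described pipeline concrete, below is a minimal PyTorch sketch of a two-stage "segmentation + classification" setup along these lines: a two-branch encoder whose per-stage fusion blocks mix B- and SE-mode features, a U-Net-style decoder that predicts the lesion mask, and a classifier that reuses the pre-trained fusion encoder on lesion patches. The channel sizes, depth, 1x1-convolution fusion, and classifier head are illustrative assumptions, not the authors' published architecture (which uses attention-based fusion blocks), and the mask-guided cropping step is omitted for brevity.

```python
# Sketch only: module names, channels, and the concatenation + 1x1-conv
# fusion are assumptions, not the published MMF-U-Net design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 conv + BN + ReLU layers (a standard U-Net stage)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class FusionBlock(nn.Module):
    """Mixes B-mode and SE-mode features at one encoder stage (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, feat_b, feat_se):
        return self.mix(torch.cat([feat_b, feat_se], dim=1))

class MMFUNet(nn.Module):
    """Two-branch encoder with per-stage fusion and a single decoder (sketch)."""
    def __init__(self, chs=(16, 32, 64)):
        super().__init__()
        self.enc_b, self.enc_se, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_ch = 1  # single-channel grayscale US input per modality
        for ch in chs:
            self.enc_b.append(conv_block(in_ch, ch))
            self.enc_se.append(conv_block(in_ch, ch))
            self.fuse.append(FusionBlock(ch))
            in_ch = ch
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2)
            for i in range(len(chs) - 1, 0, -1))
        self.decs = nn.ModuleList(
            conv_block(2 * chs[i - 1], chs[i - 1])
            for i in range(len(chs) - 1, 0, -1))
        self.head = nn.Conv2d(chs[0], 1, kernel_size=1)  # lesion mask logits

    def encode(self, b, se):
        """Returns multi-scale fused features, shallow to deep."""
        fused = []
        for enc_b, enc_se, fuse in zip(self.enc_b, self.enc_se, self.fuse):
            b, se = enc_b(b), enc_se(se)
            fused.append(fuse(b, se))
            b, se = self.pool(b), self.pool(se)
        return fused

    def forward(self, b, se):
        feats = self.encode(b, se)
        x = feats[-1]
        for up, dec, skip in zip(self.ups, self.decs, reversed(feats[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

class LesionClassifier(nn.Module):
    """Benign/malignant head reusing the pre-trained fusion encoder (sketch)."""
    def __init__(self, pretrained_unet, num_classes=2, deep_ch=64):
        super().__init__()
        self.backbone = pretrained_unet  # decoder is carried along but unused here
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(deep_ch, num_classes)

    def forward(self, b_crop, se_crop):
        deep = self.backbone.encode(b_crop, se_crop)[-1]  # deepest fused feature map
        return self.fc(self.pool(deep).flatten(1))

if __name__ == "__main__":
    unet = MMFUNet()
    b = torch.randn(2, 1, 128, 128)   # B-mode batch
    se = torch.randn(2, 1, 128, 128)  # SE-mode batch
    print(unet(b, se).shape)                    # (2, 1, 128, 128) mask logits
    print(LesionClassifier(unet)(b, se).shape)  # (2, 2) class logits
```

In the actual method, the predicted mask would first be thresholded and used to crop a region around the lesion in both modalities, and those crops (rather than the full images, as in this toy usage) would be fed to the classifier.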
Pages: 568-577
Page count: 10
Related papers
50 records in total
  • [41] Breast cancer classification based on convolutional neural network and image fusion approaches using ultrasound images
    Alotaibi, Mohammed
    Aljouie, Abdulrhman
    Alluhaidan, Najd
    Qureshi, Wasem
    Almatar, Hessa
    Alduhayan, Reema
    Alsomaie, Barrak
    Almazroa, Ahmed
    HELIYON, 2023, 9 (11)
  • [42] Flexible Fusion Network for Multi-Modal Brain Tumor Segmentation
    Yang, Hengyi
    Zhou, Tao
    Zhou, Yi
    Zhang, Yizhe
    Fu, Huazhu
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (07) : 3349 - 3359
  • [43] TAG-fusion: Two-stage attention guided multi-modal fusion network for semantic segmentation
    Zhang, Zhizhou
    Wang, Wenwu
    Zhu, Lei
    Tang, Zhibin
    DIGITAL SIGNAL PROCESSING, 2025, 156
  • [44] Multi-modal fusion network with intra- and inter-modality attention for prognosis prediction in breast cancer
    Liu, Honglei
    Shi, Yi
    Li, Ao
    Wang, Minghui
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 168
  • [45] Efficient Segmentation of Multi-modal Optoacoustic and Ultrasound Images Using Convolutional Neural Networks
    Lafci, Berkan
    Mercep, Elena
    Morscher, Stefan
    Dean-Ben, Xose Luis
    Razansky, Daniel
    PHOTONS PLUS ULTRASOUND: IMAGING AND SENSING 2020, 2020, 11240
  • [46] Joint-individual fusion structure with fusion attention module for multi-modal skin cancer classification
    Tang, Peng
    Yan, Xintong
    Nan, Yang
    Hu, Xiaobin
    Menze, Bjoern H.
    Krammer, Sebastian
    Lasser, Tobias
    PATTERN RECOGNITION, 2024, 154
  • [47] Multi-modal Fusion Network for Rumor Detection with Texts and Images
    Li, Boqun
    Qian, Zhong
    Li, Peifeng
    Zhu, Qiaoming
    MULTIMEDIA MODELING (MMM 2022), PT I, 2022, 13141 : 15 - 27
  • [48] Self-supervised multi-modal fusion network for multi-modal thyroid ultrasound image diagnosis
    Xiang, Zhuo
    Zhuo, Qiuluan
    Zhao, Cheng
    Deng, Xiaofei
    Zhu, Ting
    Wang, Tianfu
    Jiang, Wei
    Lei, Baiying
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 150
  • [49] Defect classification for specular surfaces based on deflectometry and multi-modal fusion network
    Guan, Jingtian
    Fei, Jingjing
    Li, Wei
    Jiang, Xiaoke
    Wu, Liwei
    Liu, Yakun
    Xi, Juntong
    OPTICS AND LASERS IN ENGINEERING, 2023, 163
  • [50] Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN
    Huddar, Mahesh G.
    Sannakki, Sanjeev S.
    Rajpurohit, Vijay S.
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2021, 6 (06) : 112 - 121