MetaFormer and CNN Hybrid Model for Polyp Image Segmentation

Cited by: 0
|
Authors
Lee, Hyunnam [1 ]
Yoo, Juhan [2 ]
Affiliations
[1] Incheon Int Airport Corp, Incheon 22382, South Korea
[2] Semyung Univ, Dept Elect Engn, Jecheon Si 27136, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Convolutional neural network; image segmentation; medical image processing; MetaFormer; polyp segmentation; vision transformer; VALIDATION
DOI
10.1109/ACCESS.2024.3461754
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Transformer-based methods have become dominant in the medical image research field since the Vision Transformer achieved superior performance. Although transformer-based approaches have resolved the long-range dependency problems inherent in Convolutional Neural Network (CNN) methods, they struggle to capture local detail information. Recent research therefore focuses on robustly combining local detail with semantic information. To address this problem, we propose a novel transformer-CNN hybrid network named RAPUNet. The proposed approach employs MetaFormer as the transformer backbone and introduces a custom convolutional block, RAPU (Residual and Atrous Convolution in Parallel Unit), to enhance local features and ease the fusion of local and global features. We evaluate the segmentation performance of RAPUNet on popular benchmark datasets for polyp segmentation: Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, EndoScene-CVC300, and ETIS-LaribPolypDB. Experimental results show that our model achieves competitive performance in terms of mean Dice and mean IoU. In particular, RAPUNet outperforms state-of-the-art methods on the CVC-ClinicDB dataset. Code is available at https://github.com/hyunnamlee/RAPUNet.
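The core idea the abstract attributes to the RAPU block — a residual (identity) path combined in parallel with atrous (dilated) convolution paths — can be sketched minimally as below. This is an illustration of the general technique, not the authors' published configuration: the single-channel setting, the dilation rates (1 and 2), the ReLU activations, and the plain additive fusion are all assumptions; the actual RAPU design in the paper may differ in channel counts, normalization, and branch structure.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2D cross-correlation of a single-channel map with a
    dilated (atrous) kernel. Written with explicit loops for clarity,
    not speed."""
    kh, kw = kernel.shape
    eff_h = dilation * (kh - 1) + 1  # effective (dilated) kernel height
    eff_w = dilation * (kw - 1) + 1
    ph, pw = eff_h // 2, eff_w // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for a in range(kh):
                for b in range(kw):
                    # dilation spaces out the kernel taps, widening the
                    # receptive field without adding parameters
                    out[i, j] += kernel[a, b] * xp[i + a * dilation,
                                                   j + b * dilation]
    return out

def rapu_block(x, k1, k2):
    """Residual and atrous convolution in parallel: an identity path is
    summed with two ReLU-activated dilated-conv branches."""
    branch1 = np.maximum(dilated_conv2d(x, k1, dilation=1), 0)  # local detail
    branch2 = np.maximum(dilated_conv2d(x, k2, dilation=2), 0)  # wider context
    return x + branch1 + branch2  # residual sum of the parallel branches
```

The parallel arrangement lets the block keep fine local detail (small effective receptive field) and broader context (dilated branch) in the same output, while the identity path preserves gradient flow as in a standard residual unit.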
Pages: 133694-133702
Page count: 9
Related Papers
50 records total
  • [1] SEGTRANSVAE: HYBRID CNN - TRANSFORMER WITH REGULARIZATION FOR MEDICAL IMAGE SEGMENTATION
    Quan-Dung Pham
    Hai Nguyen-Truong
    Nam Nguyen Phuong
    Nguyen, Khoa N. A.
    Nguyen, Chanh D. T.
    Bui, Trung
    Truong, Steven Q. H.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022
  • [2] A NEW CNN OSCILLATOR MODEL FOR PARALLEL IMAGE SEGMENTATION
    Strzelecki, Michal
    Kowalski, Jacek
    Kim, Hyongsuk
    Ko, Soohong
    INTERNATIONAL JOURNAL OF BIFURCATION AND CHAOS, 2008, 18(7): 1999-2015
  • [3] A hybrid model for semantic image segmentation
    Liu, S. W.
    Li, M.
    Duan, X. T.
    2015 3rd International Symposium on Computer, Communication, Control and Automation (3CA 2015), 2015: 150-155
  • [4] TFCNs: A CNN-Transformer Hybrid Network for Medical Image Segmentation
    Li, Zihan
    Li, Dihan
    Xu, Cangbai
    Wang, Weice
    Hong, Qingqi
    Li, Qingde
    Tian, Jie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532: 781-792
  • [5] CNN-Transformer Hybrid Architecture for Underwater Sonar Image Segmentation
    Lei, Juan
    Wang, Huigang
    Lei, Zelin
    Li, Jiayuan
    Rong, Shaowei
    REMOTE SENSING, 2025, 17(4)
  • [6] Chaotic CNN for image segmentation
    Lozowski, A
    Cholewo, TJ
    Jankowski, S
    Tworek, M
    1996 FOURTH IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND THEIR APPLICATIONS, PROCEEDINGS (CNNA-96), 1996: 219-223
  • [7] Parallel Transformer-CNN Model for Medical Image Segmentation
    Zhou, Mingkun
    Nie, Xueyun
    Liu, Yuhang
    Li, Doudou
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024: 1048-1051
  • [8] Hybrid feature CNN model for point cloud classification and segmentation
    Zhang, Xinliang
    Fu, Chenlin
    Zhao, Yunji
    Xu, Xiaozhuo
    IET IMAGE PROCESSING, 2020, 14(16): 4086-4091
  • [9] Hybrid graphical model for semantic image segmentation
    Wang, Li-Li
    Yung, Nelson H. C.
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2015, 28: 83-96
  • [10] An Improved Hybrid Model for Medical Image Segmentation
    Yang Feng
    Sun Xiaohuan
    Chen Guoyue
    Wen Tiexiang
    2008 11TH IEEE SINGAPORE INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS (ICCS), VOLS 1-3, 2008: 367+