APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation

Cited by: 19
Authors
Chen, Jiacheng [1 ]
Gao, Bin-Bin [2 ]
Lu, Zongqing [1 ]
Xue, Jing-Hao [3 ]
Wang, Chengjie [2 ]
Liao, Qingmin [1 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[2] Tencent YouTu Lab, Shenzhen 518057, Peoples R China
[3] UCL, Dept Stat Sci, London WC1E 6BT, England
Keywords
Contrastive learning; few-shot learning; metric learning; self-supervised learning; semantic segmentation;
DOI
10.1109/TMM.2022.3174405
Chinese Library Classification (CLC): TP [Automation & Computer Technology]
Discipline Classification Code: 0812
Abstract
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images. Most advanced solutions exploit a metric-learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparisons. To address this issue, we present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes, and thus construct complete sample pairs for learning semantic alignment with query features. This complementary feature-learning manner effectively enriches feature comparison and helps yield an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch) that generates prototypes and then combines them with query features to perform comparisons. In addition, the proposed class-agnostic branch is simple yet effective: in practice, it adaptively generates multiple class-agnostic prototypes for query images and learns feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5^i and COCO-20^i demonstrate the superiority of our method. Without sacrificing inference efficiency, our model achieves state-of-the-art results in both the 1-shot and 5-shot semantic segmentation settings.
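To make the prototype-matching idea in the abstract concrete, the plain-PyTorch sketch below illustrates, under stated assumptions, the two kinds of prototypes and the comparison step: a class-specific prototype obtained by masked average pooling over support features, several class-agnostic prototypes obtained here via a simple k-means stand-in over query features (the paper's own adaptive generation and self-contrastive training are not reproduced), and cosine-similarity matching of query features against the prototypes. This is a minimal sketch, not the authors' APANet implementation; the function names (masked_average_pooling, adaptive_query_prototypes, segment_by_prototypes) are hypothetical.

# Minimal sketch of prototype-based matching as described in the abstract.
# NOT the authors' APANet code; a hedged illustration of the two-branch idea
# (class-specific vs. class-agnostic prototypes) in plain PyTorch.
import torch
import torch.nn.functional as F


def masked_average_pooling(feat, mask):
    """Class-specific prototype: average support features inside the support mask.

    feat: (B, C, H, W) support features; mask: (B, 1, Hm, Wm) binary mask.
    Returns one (B, C) prototype per support image.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)


def adaptive_query_prototypes(query_feat, k=3, iters=5):
    """Class-agnostic prototypes: cluster query features into k groups.

    A simple k-means stand-in for the adaptive generation described in the
    abstract. query_feat: (C, H, W) -> returns (k, C) prototypes.
    """
    C, H, W = query_feat.shape
    x = query_feat.reshape(C, -1).t()                   # (HW, C)
    centers = x[torch.randperm(x.size(0))[:k]].clone()  # random initialization
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(dim=1)  # nearest center per pixel
        for j in range(k):
            pts = x[assign == j]
            if pts.numel() > 0:
                centers[j] = pts.mean(dim=0)
    return centers


def segment_by_prototypes(query_feat, prototypes, temperature=20.0):
    """Match each query feature to every prototype by cosine similarity.

    query_feat: (C, H, W); prototypes: (K, C). Returns (K, H, W) logits,
    from which a per-pixel label map can be taken via argmax over K.
    """
    C, H, W = query_feat.shape
    q = F.normalize(query_feat.reshape(C, -1), dim=0)   # (C, HW), unit channel norm
    p = F.normalize(prototypes, dim=1)                  # (K, C)
    logits = temperature * (p @ q)                      # (K, HW) cosine scores
    return logits.reshape(-1, H, W)

In this sketch, the class-specific support prototype and the class-agnostic query prototypes would be stacked into a single prototype set before matching, mirroring the "complete sample pairs" described above; APANet's training objectives, including the self-contrastive alignment of the class-agnostic branch, are omitted.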
Pages: 4361-4373
Number of pages: 13
Related Papers
50 in total
  • [21] Adaptive similarity-guided self-merging network for few-shot semantic segmentation
    Liu, Yu
    Guo, Yingchun
    Zhu, Ye
    Yu, Ming
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 119
  • [22] POEM: A prototype cross and emphasis network for few-shot semantic segmentation
    Cheng, Xu
    Li, Haoyuan
    Deng, Shuya
    Peng, Yonghong
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 234
  • [23] Relevant Intrinsic Feature Enhancement Network for Few-Shot Semantic Segmentation
    Bao, Xiaoyi
    Qin, Jie
    Sun, Siyang
    Wang, Xingang
    Zheng, Yun
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024, : 765 - 773
  • [24] Few-Shot Semantic Segmentation via Frequency Guided Neural Network
    Rao, Xiya
    Lu, Tao
    Wang, Zhongyuan
    Zhang, Yanduo
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 1092 - 1096
  • [25] Self-regularized prototypical network for few-shot semantic segmentation
    Ding, Henghui
    Zhang, Hui
    Jiang, Xudong
    PATTERN RECOGNITION, 2023, 133
  • [26] LEARNING WITH MEMORY FOR FEW-SHOT SEMANTIC SEGMENTATION
    Lu, Hongchao
    Wei, Chao
    Deng, Zhidong
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 629 - 633
  • [27] CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation
    Guo, Shi-Cheng
    Liu, Shang-Kun
    Wang, Jing-Yu
    Zheng, Wei-Min
    Jiang, Cheng-Yu
    ENTROPY, 2023, 25 (09)
  • [28] MGNet: Mutual-guidance network for few-shot semantic segmentation
    Chang, Zhaobin
    Lu, Yonggang
    Wang, Xiangwen
    Ran, Xingcheng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 116
  • [29] Few-shot 3D Point Cloud Semantic Segmentation with Prototype Alignment
    Wei, Maolin
    PROCEEDINGS OF 2023 8TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING TECHNOLOGIES, ICMLT 2023, 2023, : 195 - 200
  • [30] KLSANet: Key local semantic alignment Network for few-shot image classification
    Sun, Zhe
    Zheng, Wang
    Guo, Pengfei
    NEURAL NETWORKS, 2024, 178