APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation

Cited by: 19
Authors
Chen, Jiacheng [1 ]
Gao, Bin-Bin [2 ]
Lu, Zongqing [1 ]
Xue, Jing-Hao [3 ]
Wang, Chengjie [2 ]
Liao, Qingmin [1 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[2] Tencent YouTu Lab, Shenzhen 518057, Peoples R China
[3] UCL, Dept Stat Sci, London WC1E 6BT, England
Keywords
Contrastive learning; few-shot learning; metric learning; self-supervised learning; semantic segmentation;
DOI
10.1109/TMM.2022.3174405
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline classification code
0812 ;
Abstract
Few-shot semantic segmentation aims to segment novel-class objects in a query image given only a few labeled support images. Most advanced solutions exploit a metric-learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparisons. To address this issue, we present an adaptive prototype representation that introduces both class-specific and class-agnostic prototypes, thus constructing complete sample pairs for learning semantic alignment with query features. This complementary feature-learning scheme effectively enriches feature comparison and helps yield an unbiased segmentation model in the few-shot setting. It is implemented as a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch) that generates prototypes and then combines them with query features to perform comparisons. In addition, the proposed class-agnostic branch is simple yet effective: in practice, it adaptively generates multiple class-agnostic prototypes for query images and learns feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5^i and COCO-20^i demonstrate the superiority of our method. Without sacrificing inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot semantic segmentation settings.
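The matching pipeline the abstract describes (pooling support features under a mask into a prototype, then comparing each query-feature location against the prototypes) can be sketched as below. This is a generic illustration of prototype-based matching as commonly used in this line of work, not the authors' actual APANet code; the function names and the choice of cosine similarity are assumptions for illustration.

```python
import numpy as np

def masked_average_pool(features, mask):
    """Collapse support features under a binary mask into one prototype
    vector (masked average pooling, a standard step in prototype-based
    few-shot segmentation). features: (C, H, W); mask: (H, W), 1 = region."""
    weighted = features * mask[None, :, :]
    return weighted.sum(axis=(1, 2)) / (mask.sum() + 1e-8)

def cosine_match(query_features, prototypes):
    """Score every query location against each prototype by cosine
    similarity; the arg-max over prototypes gives a per-pixel label.
    query_features: (C, H, W); prototypes: (K, C); returns (H, W) labels."""
    C, H, W = query_features.shape
    q = query_features.reshape(C, -1)                              # (C, H*W)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    scores = p @ q                                                 # (K, H*W)
    return scores.argmax(axis=0).reshape(H, W)

# Toy example: two feature channels, top half of the image "foreground".
feat = np.zeros((2, 4, 4))
feat[0, :2, :] = 1.0          # foreground pixels point along channel 0
feat[1, 2:, :] = 1.0          # background pixels point along channel 1
mask = np.zeros((4, 4))
mask[:2, :] = 1.0
proto_fg = masked_average_pool(feat, mask)       # ≈ [1, 0]
proto_bg = masked_average_pool(feat, 1 - mask)   # ≈ [0, 1]
labels = cosine_match(feat, np.stack([proto_bg, proto_fg]))
```

The paper's contribution is what set of prototypes enters this comparison: adding adaptively generated class-agnostic prototypes alongside the class-specific one, so that query features have complete pairs to align against rather than only the foreground prototype.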
Pages: 4361 - 4373
Page count: 13
Related papers
50 items total
  • [41] Self-support Few-Shot Semantic Segmentation
    Fan, Qi
    Pei, Wenjie
    Tai, Yu-Wing
    Tang, Chi-Keung
    COMPUTER VISION, ECCV 2022, PT XIX, 2022, 13679 : 701 - 719
  • [42] Query semantic reconstruction for background in few-shot segmentation
    Guan, Haoyan
    Spratling, Michael
    VISUAL COMPUTER, 2024, 40 (02): : 799 - 810
  • [43] Few-Shot Semantic Segmentation via Mask Aggregation
    Ao, Wei
    Zheng, Shunyi
    Meng, Yan
    Yang, Yang
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [45] Incorporating Depth Information into Few-Shot Semantic Segmentation
    Zhang, Yifei
    Sidibe, Desire
    Morel, Olivier
    Meriaudeau, Fabrice
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 3582 - 3588
  • [46] Dynamic Extension Nets for Few-shot Semantic Segmentation
    Liu, Lizhao
    Cao, Junyi
    Liu, Minqian
    Guo, Yong
    Chen, Qi
    Tan, Mingkui
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1441 - 1449
  • [47] Few-shot semantic segmentation: a review on recent approaches
    Chang, Zhaobin
    Lu, Yonggang
    Ran, Xingcheng
    Gao, Xiong
    Wang, Xiangwen
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35 : 18251 - 18275
  • [48] Few-Shot Semantic Segmentation for Complex Driving Scenes
    Zhou, Jingxing
    Chen, Ruei-Bo
    Beyerer, Juergen
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 695 - 702
  • [49] Prediction Calibration for Generalized Few-Shot Semantic Segmentation
    Lu, Zhihe
    He, Sen
    Li, Da
    Song, Yi-Zhe
    Xiang, Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3311 - 3323
  • [50] Cross-Domain Few-Shot Semantic Segmentation
    Lei, Shuo
    Zhang, Xuchao
    He, Jianfeng
    Chen, Fanglan
    Du, Bowen
    Lu, Chang-Tien
    COMPUTER VISION - ECCV 2022, PT XXX, 2022, 13690 : 73 - 90