Few-shot object segmentation with a new feature aggregation module

Cited by: 4
Authors
Liu, Kaijun [1 ]
Lyu, Shujing [1 ]
Shivakumara, Palaiahnakote [2 ]
Lu, Yue [1 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai, Peoples R China
[2] Univ Malaya, Fac Comp Sci & Informat Technol, Kuala Lumpur, Malaysia
Keywords
Few-shot learning; Object segmentation; Feature aggregation; Attention mechanism;
DOI
10.1016/j.displa.2023.102459
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The success of convolutional neural networks for object segmentation depends on a large amount of high-quality training data. However, annotating such data for pixel-wise segmentation is labor-intensive. To reduce this annotation burden, few-shot learning has been introduced for object segmentation, using only a few samples for training without compromising performance. However, current few-shot models are biased towards the seen classes rather than being class-agnostic, owing to the lack of global context prior attention. Therefore, this study proposes a few-shot object segmentation model with a new feature aggregation module. Specifically, the proposed work develops a detail-aware module to enhance the discrimination of details with diversified attributes. To enhance the semantics of each pixel, we propose a global attention module that aggregates detailed features containing semantic information. Furthermore, to improve performance, the model derives a class-specific prototype from the support samples through a category prototype block. The model then predicts the label of each pixel in the query sample by estimating the distance between the pixel and the prototypes. Experiments on standard datasets demonstrate the superiority of the proposed model over the state of the art in segmentation with few training samples.
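The prototype-and-distance scheme described in the abstract can be sketched as follows. This is a minimal illustration of generic prototype-based few-shot segmentation, assuming masked average pooling for prototype extraction and cosine similarity for pixel labeling; the paper's detail-aware module, global attention module, and the exact form of its category prototype block are not reproduced here, and all function names are illustrative.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    """Build a class prototype by averaging support features under a mask.

    features: (C, H, W) backbone feature map of a support image
    mask:     (H, W) binary mask selecting one class
    returns:  (C,) prototype vector
    """
    mask = mask.unsqueeze(0)                    # (1, H, W), broadcasts over C
    summed = (features * mask).sum(dim=(1, 2))  # (C,)
    area = mask.sum().clamp(min=1e-6)           # avoid division by zero
    return summed / area

def predict_by_distance(query_features, prototypes):
    """Assign each query pixel to its nearest prototype (cosine similarity).

    query_features: (C, H, W) backbone feature map of the query image
    prototypes:     (K, C), one prototype per class (incl. background)
    returns:        (H, W) predicted class index per pixel
    """
    C, H, W = query_features.shape
    q = F.normalize(query_features.reshape(C, -1), dim=0)  # (C, H*W)
    p = F.normalize(prototypes, dim=1)                     # (K, C)
    similarity = p @ q                                     # (K, H*W)
    return similarity.argmax(dim=0).reshape(H, W)

# Toy 1-way 1-shot episode with random stand-in backbone features.
torch.manual_seed(0)
support_feat = torch.randn(64, 32, 32)
support_mask = torch.zeros(32, 32)
support_mask[8:24, 8:24] = 1.0                 # hypothetical foreground region
fg_proto = masked_average_pooling(support_feat, support_mask)
bg_proto = masked_average_pooling(support_feat, 1.0 - support_mask)
prototypes = torch.stack([bg_proto, fg_proto])  # class 0 = background
query_feat = torch.randn(64, 32, 32)
pred = predict_by_distance(query_feat, prototypes)  # (32, 32) label map
```

In practice the foreground/background prototypes would come from real backbone features of annotated support images, and the similarity map is usually computed at feature resolution and then upsampled to the input size.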
Pages: 10
Related Papers
50 items in total
  • [1] Few-Shot Object Detection via Variational Feature Aggregation
    Han, Jiaming
    Ren, Yuqiang
    Ding, Jian
    Yan, Ke
    Xia, Gui-Song
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 755 - 763
  • [2] A New Local Transformation Module for Few-Shot Segmentation
    Yang, Yuwei
    Meng, Fanman
    Li, Hongliang
    Wu, Qingbo
    Xu, Xiaolong
    Chen, Shuai
    MULTIMEDIA MODELING (MMM 2020), PT II, 2020, 11962 : 76 - 87
  • [3] Feature Weighting and Boosting for Few-Shot Segmentation
    Khoi Nguyen
    Todorovic, Sinisa
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 622 - 631
  • [4] Better Class Feature Representation for Few-Shot Object Detection: Feature Aggregation and Feature Space Redistribution
    Zhang, Wen
    Xu, Yuping
    Chen, Guorui
    Li, Zhijiang
    JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, 2023, 67 (02)
  • [5] Few-shot Object Detection with Feature Attention Highlight Module in Remote Sensing Images
    Xiao, Zixuan
    Zhong, Ping
    Quan, Yuan
    Yin, Xuping
    Xue, Wei
    2020 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING AND ARTIFICIAL INTELLIGENCE, 2020, 11584
  • [6] Few-Shot Semantic Segmentation via Mask Aggregation
    Ao, Wei
    Zheng, Shunyi
    Meng, Yan
    Yang, Yang
    NEURAL PROCESSING LETTERS, 2024, 56 (02)
  • [8] Few-shot video object segmentation with prototype evolution
    Mao, Binjie
    Liu, Xiyan
    Shi, Linsu
    Yu, Jiazhong
    Li, Fei
    Xiang, Shiming
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (10): : 5367 - 5382
  • [10] Efficient Feature Enhancement for Few-Shot Object Detection
    Li, Lin
    Lei, Zhou
    Chen, Shengbo
    Xu, Qingguo
    2022 IEEE 6TH ADVANCED INFORMATION TECHNOLOGY, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IAEAC), 2022, : 1206 - 1210